Last Updated: 2013-07-13 16:49:36 UTC
by Lenny Zeltser (Version: 1)
What if online scammers weren't sure whether the user account they are targeting is really yours, or whether the information they compiled about you is real? It's worth considering whether decoy online personas might help in the quest to safeguard our digital identities and data.
I believe deception tactics, such as the selective and careful use of honeypots, hold promise for defending enterprise IT resources. Some forms of deception could also protect individuals against online scammers and other attackers. This approach might not be quite practical today for most people, but in the future we might find it both necessary and achievable.
Human attackers and malicious software pursue user accounts and data online through harvesting, phishing, password-guessing, software vulnerabilities, and various other means. How might we use decoys to confuse, misdirect, slow down and detect adversaries engaged in such activities?
Black-Ops Reputation Management
One example of using deception to control what information is visible about a person online is described in the article Scrubbed, written by Graeme Wood. It details the shady practice of "black-ops reputation management." The article discusses one firm's services to drown out negative news about its wealthy clients by publishing flattering, but often misleading or untrue information. The firm builds "white-noise websites, engineered to drown out an ugly signal and designed perhaps less for discerning human readers than for search-engine robots, which organize and curate all the information users discover via search."
These practices are often aimed at bumping negative stories about the client off the first page of search engine results, since few people look beyond the first page when casually looking for information about the person. Reportedly, this "black-ops" service costs $10,000 per month and involves "mining the client's history of publication and philanthropy, then pumping up the volume to drown out all else."
Such services sometimes include the creation of positive, but fake online personas that use the client's name. This way, one could never be certain whether the flattering information referred to the doppelgänger or the real person whose reputation needed a boost.
The article's author "imagined a future in which rich people create dozens of scapegoats for themselves, like Saddam Hussein with his body doubles." I am wondering whether similar techniques could be used for good—to help protect people against online scams and attacks—without being expensive.
Honeypot Email Addresses for Catching Spammers and Misdirecting Attackers
One example of a basic deception practice employed by some individuals today involves using a unique email address for every service with which the person signs up. If the person starts receiving spam on one of those email addresses, he or she can easily determine which service leaked or misused the address.
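This per-service addressing scheme is easy to sketch. The example below assumes a mail provider that supports plus-addressing (as Gmail does), where mail to user+tag@example.com is delivered to user@example.com; the names and addresses are purely illustrative.

```python
def tagged_address(user, domain, service):
    """Build a unique signup address for one online service."""
    return f"{user}+{service}@{domain}"

def leaking_service(spam_recipient):
    """Given an address that now receives spam, recover which
    service leaked or misused it (None if the address is untagged)."""
    local, _, _ = spam_recipient.partition("@")
    _, plus, tag = local.partition("+")
    return tag if plus else None

signup = tagged_address("alice", "example.com", "shopsite")
culprit = leaking_service(signup)  # identifies "shopsite" as the leaker
```

Note that plus-addressing is trivial for a sophisticated spammer to strip; registering a personal domain and creating a distinct mailbox or alias per service is a more robust variant of the same idea.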
A more technical variation of this technique, called spamtrap, employs honeypot email addresses that are created for the sole purpose of luring spammers. As the Wikipedia article explains, this address is typically only published in a manner that's not directly visible to humans, so that "an automated e-mail address harvester (used by spammers) can find the email address, but no sender would be encouraged to send messages to the email address for any legitimate purpose."
Given the popularity of email as an initial attack vector, individuals could safeguard themselves by using different email addresses for different online services. Moreover, they could purposefully expose some email addresses that are not used for important, personal communications. Any correspondence sent to these addresses can be assumed to be malicious. An extra step might involve setting up fake social networking profiles for these addresses.
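The policy described above — anything sent to an exposed decoy address is presumed malicious — could be expressed as a simple mail-filtering rule. The sketch below assumes a mail-processing hook that sees each inbound message's sender and recipient; the addresses are illustrative.

```python
# Addresses deliberately exposed to harvesters; never used for real mail.
SPAMTRAPS = {"press-kit@example.com", "webmaster-old@example.com"}
REAL_INBOX = "alice.real@example.com"

blocklist = set()

def handle_inbound(sender, recipient):
    """Classify a message: mail to a spamtrap is treated as malicious,
    and its sender is blocked from reaching the real inbox later."""
    if recipient in SPAMTRAPS:
        blocklist.add(sender)   # early warning: a harvester found the trap
        return "quarantine"
    if sender in blocklist:
        return "reject"         # known-bad sender now targeting the real inbox
    return "deliver"
```

The useful property here is the early-warning effect: a sender hits the trap first, and is already on the blocklist by the time it targets the real address.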
Decoy Social Network Profiles
The wealth of personal details available on social networking sites allows attackers to target individuals using social engineering, secret question-guessing and other techniques. For some examples of such approaches, see The Use of Fake or Fraudulent LinkedIn Profiles and Data Mining Resumes for Computer Attack Reconnaissance.
Setting up one or more fake social network profiles (e.g., on Facebook) that use the person's real name can help the individual deflect the attack or can act as an early warning of an impending attack. A decoy profile could purposefully expose some inaccurate information, while the person's real profile would be more carefully concealed using the site's privacy settings. Decoy profiles would be associated with spamtrap email addresses.
Similarly, the person could expose decoy profiles on other sites, for instance those that reveal shopping habits (e.g., Amazon), musings (e.g., Twitter), skills (e.g., GitHub), travel (e.g., TripIt), affections (e.g., Pinterest), music taste (e.g., Pandora) and so on. The person's decoy identities could also have fake resumes available on sites such as Indeed and Monster.com.
Realism of Decoy Online Personas
If decoy online personas become popular, attackers will profile victims more carefully to flag fake data. This in itself might be a good thing, because such activities would increase attackers' costs, which some consider a worthy accomplishment for any defender.
With time, defenders will need to ensure that their doppelgängers appear realistic, for example by demonstrating a steady stream of updates on decoy profiles and activity streams. This could be accomplished and automated with specialized tools. Such tools would build upon the capabilities of social media management utilities like HootSuite. A honeypot online persona might even have a phone number and an inbox, provided by tools such as Google Voice. The tools might further add realism to the decoy by automatically responding to the attacker's calls, emails and chat messages.
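A steady stream of plausible updates could be generated from templates. The sketch below is purely illustrative: the templates and filler values are invented, and post_update (not shown) stands in for whatever posting tool or API the defender actually uses — no specific service's API is assumed.

```python
import random

# Innocuous, low-information update templates for a decoy persona.
TEMPLATES = [
    "Great coffee at {place} this morning.",
    "Reading about {topic} -- fascinating stuff.",
    "Long week. Looking forward to the weekend.",
]

# Candidate values for each template placeholder.
FILLERS = {
    "place": ["a local cafe", "the airport", "the office"],
    "topic": ["gardening", "travel planning", "photography"],
}

def next_decoy_update(rng=random):
    """Pick a template and fill its placeholders with random values."""
    template = rng.choice(TEMPLATES)
    return template.format(**{k: rng.choice(v) for k, v in FILLERS.items()})
```

A scheduler would call next_decoy_update() at randomized intervals, since perfectly regular posting times would themselves be a tell that the persona is automated.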
Using decoys to protect online identities might be overkill for most people at the moment. However, as attack tactics evolve, employing deception in this manner could be beneficial. As technology matures, so will our ability to establish realistic online personas that deceive our adversaries.
-- Lenny Zeltser
Lenny Zeltser focuses on safeguarding customers' IT operations at NCR Corp. He also teaches how to analyze malware at SANS Institute. Lenny is active on Twitter and Google+. He also writes a security blog.
Last Updated: 2013-07-13 03:17:05 UTC
by Lenny Zeltser (Version: 1)
We experienced a system glitch today, which resulted in repeated notifications being sent out regarding a recently published diary. We apologize for the inconvenience and are investigating this problem.