Google Isn’t Just Buying Fitbit, They’re Tracking Your Donut Habit

Spinning Wildly on the Hamster Wheel of the Surveillance Economy

You’re heading to the gym for a workout when you decide to surprise your coworkers with a treat. You search for the nearest bagel shop on your Google Maps app. The app directs you to its closest advertiser, Donut Feel Good?, which is actually a donut shop just short of the bagel place. Your heart pounds from the joy of anticipation — your team will LOVE you (and the sugar rush). 

Just as you’re leaving the donut place, your phone alerts you to a coupon at your favorite coffee shop. “Why not?” you think, as Google nudges your behavior just a bit more. As you bite into your first donut and bask in coworker glory, Google is busy sharing your lack of exercise and poor eating habits with your health insurance company, which also has an app on your phone.  

Welcome to the surveillance economy, where the product is your data.

Acquiring Fitbit Moves Google Out of Your Pocket and Into Your Body 

Thanks to Google’s purchase of Fitbit, Google doesn’t just know your location, your destination and your purchases; it now knows your resting heart rate and the uptick in beats per minute as you anticipate that first donut bite. Google is at the forefront of the surveillance economy — making money by harvesting the digital exhaust we all emit just living our lives. 

Google already has reams of data on our internet searches (Google.com), location data (maps and Android phones), emails and contacts (Gmail), home conversations and digital assistant searches (Google Home), video habits (YouTube), smarthome video footage and thermostat settings (Nest) and document contents (Docs, Sheets, etc.). The sheer volume of our digital exhaust that they’re coalescing, analyzing and selling is phenomenal.

Combine that psychographic and behavioral data with the health data of 28 million Fitbit users, and Google can probably predict when you’ll need to use the toilet. 

Fitbit tracks what users eat, how much they weigh, how much they exercise, the duration and quality of their sleep and their heart rate. With advanced devices, women can log menstrual cycles. Fitbit scales keep track of body mass index and what percentage of a user’s weight is fat. And the app (no device required) tracks all of that, plus blood sugar.  

It’s not a stretch of the imagination to think Fitbit and other health-tracking devices also know your sexual activity and heart irregularities by location (e.g., your heart rate goes up when you pass the Tesla dealership, a car you’ve always wanted). Google wants to get its hands on all that information, and if past behavior is any indicator, they want to sell access to it. 

As Reuters noted, much of Fitbit’s value “may now lie in its health data.”

Can We Trust How Google Uses Our Health Data? 

Regarding the sale, Fitbit said, “Consumer trust is paramount to Fitbit. Strong privacy and security guidelines have been part of Fitbit’s DNA since day one, and this will not change.” 

But can we trust that promise? This is how user-data policy scope creep typically works: once we stop paying attention and want to start using our Fitbits again, the company changes its policies and starts sharing customer data. They notify us in a multipage email that links to a hundred-page policy we’ll never read. And even if we do take the time to read it, are we really going to give up our Fitbits? We’ve seen this tactic play out again and again with Google, Facebook and a host of other companies.

Google put out its own statement, assuring customers the company would never sell personal information and that Fitbit health and wellness data would not be used in its advertising. The statement said Fitbit customers had the power to review, move or delete their data, but California is the only U.S. state that can require the company to do so by law — under the California Consumer Privacy Act, set to go into effect next year. 

Tellingly, Google stopped short of saying the data won’t be used for purposes other than advertising. Nor did they say they won’t categorize you into a genericized buyer’s profile (Overweight, Underfit & Obsessed with Donuts) that can be sold to their partners.

And advertisements are just the tip of the iceberg. Google can use the data for research and to develop health care products, which means it will have an enormous influence on the types of products that are developed, including pharmaceuticals. If that isn’t troubling to you, remember that Google (and big pharma) are in business to make money, not serve the public good. 

Google Has Demonstrated Repeatedly That It Can’t Be Trusted with Our Data

Just this week, we learned that Google has been quietly working with St. Louis-based Ascension, the second-largest health system in the U.S., collecting and aggregating the detailed health information of millions of Americans in 21 states. 

Code-named Project Nightingale, the secret collaboration began last year and, as the Wall Street Journal reported, “The data involved in the initiative encompasses lab results, doctor diagnoses and hospitalization records, among other categories, and amounts to a complete health history, including patient names and dates of birth.”

The Journal also reported that neither the doctors nor patients involved have been notified, and at least 150 Google employees have access to the personal health data of tens of millions of patients. Remarkably, this is all legal under a 1996 law that allows hospitals to share data with business partners without patients’ consent. Google is reportedly using the data to develop software (that uses AI and machine learning) “that zeroes in on individual patients to suggest changes to their care.”

However, the day after the story broke, a federal inquiry was launched into Project Nightingale. The Office for Civil Rights in the Department of Health and Human Services is looking into whether HIPAA protections were fully implemented in accordance with the 1996 law.

Your Health Insurance Could Be at Stake

Meanwhile, Fitbit has been selling devices to employees through corporate wellness programs for years and has teamed up with health insurers, including United Healthcare, Humana and Blue Cross Blue Shield.

Even if individual data from Fitbit users isn’t shared, Google can use it to deduce all sorts of health trends. It’s also possible that “anonymous” information can be re-identified, meaning data can be matched with individual users. This sets up a scenario where we can be denied health care coverage or charged higher premiums based on data gathered on our eating or exercise habits. 
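Re-identification works by joining datasets on “quasi-identifiers” — fields like ZIP code, birth year and gender that aren’t names but, in combination, often point to a single person. A toy Python sketch of the idea (all names and values below are invented for illustration):

```python
# Toy illustration of re-identification: two datasets that each look
# harmless can be joined on quasi-identifiers (ZIP code, birth year,
# gender) to match an "anonymous" health record to a named individual.
# Every record here is made up.

anonymous_fitness = [
    {"zip": "80226", "birth_year": 1984, "gender": "F", "avg_heart_rate": 88},
    {"zip": "80202", "birth_year": 1975, "gender": "M", "avg_heart_rate": 72},
]

public_voter_roll = [
    {"name": "Jane Doe", "zip": "80226", "birth_year": 1984, "gender": "F"},
    {"name": "John Roe", "zip": "80202", "birth_year": 1975, "gender": "M"},
]

def reidentify(anon_rows, named_rows, keys=("zip", "birth_year", "gender")):
    """Match anonymous rows to named rows on shared quasi-identifiers."""
    matches = []
    for anon in anon_rows:
        for named in named_rows:
            if all(anon[k] == named[k] for k in keys):
                matches.append({"name": named["name"], **anon})
    return matches

for row in reidentify(anonymous_fitness, public_voter_roll):
    print(row["name"], "->", row["avg_heart_rate"], "bpm")
```

Real-world attacks use far larger datasets, but the mechanism is exactly this simple: enough overlapping fields, and “anonymous” stops meaning anonymous.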

Now couple that with data on what foods we buy, where we go on vacation and our most recent Google searches, and companies will not only be able to track our behavior, they’ll be able to predict it. This kind of digital profile makes a credit report look quaint by comparison.

Get Off the Hamster Wheel

For the time being, you control many of the inputs that fuel the surveillance economy. You can choose to take off your Fitbit. You can change the default privacy settings on your phone. You can delete apps that track your fitness and health, buy scales that don’t connect to the internet and opt out of information sharing for the apps and devices you must use. Your greatest tool in the fight for privacy is your intentional use of technology.

In other words, you do have a measure of control over your data. Donut Feel Good?


About Cybersecurity Keynote Speaker John Sileo

John Sileo is the founder and CEO of The Sileo Group, a privacy and cybersecurity think tank, in Lakewood, Colorado, and an award-winning author, keynote speaker, and expert on technology, cybersecurity and tech/life balance.

Disinformation Campaigns Are Coming for Your Bottom Line 

The rise of disinformation campaigns could put the reputation of your company at risk

Imagine waking up to find the internet flooded with fake news that one of your products was killing hordes of people or your company had been implicated in a human trafficking ring. Imagine if there was a deepfake video of you or one of your company executives engaging in criminal activity: purchasing illegal drugs, bribing an official or defrauding the company and its shareholders. 

Welcome to the age of disinformation campaigns.

These types of campaigns are increasingly being used to target businesses and executives. For centuries, they’ve been used as a political tool for one simple reason: They work. There’s ample evidence that Russia manipulated the 2016 presidential election through fake news. In July, a European Commission analysis found that Russia targeted the European parliamentary elections, and just last week, Facebook and Twitter had to take action against China after it orchestrated numerous coordinated social media campaigns to undermine political protests in Hong Kong. 

From Italy to Brazil, Nigeria to Myanmar, governments or individuals are sowing division, discrediting an opponent or swaying an election with false information — often with deadly consequences.

Here at home, there have been numerous disinformation campaigns aimed at politicians and other individuals. Earlier this summer, a video of House Speaker Nancy Pelosi, doctored to make it appear that she was drunk, went viral. Last July, the Conservative Review network (CRTV) posted an interview to Facebook with Congresswoman Alexandria Ocasio-Cortez (who was then a candidate) where she was generally confused and appeared to think Venezuela was in the Middle East. It turned out the “interview” was a mashup of an interview Ocasio-Cortez gave on the show Firing Line spliced with staged questions from CRTV host Allie Stuckey. The post was viewed over a million times within 24 hours and garnered derisive comments from viewers who thought it was real — before Stuckey announced that it was meant as satire. 

Republican politicians have also been targeted (though to a lesser degree). Last year, North Dakota Democrats ran a Facebook ad under a page titled “Hunter Alerts.” The ad warned North Dakotans that they could lose their out-of-state hunting licenses if they voted in the midterm elections, a claim that was unsubstantiated and refuted by the state’s GOP.

Regardless of the targets, disinformation campaigns are designed to leave you wondering what information to trust and who to believe. They succeed when they sow any sense of doubt in your thinking.

The same technology that makes the spread of false information in the political arena so dangerous and effective is now being aimed at the business sector. 

Earlier this year, the Russian network RT America — which was identified as a “principal meddler” in the 2016 presidential election by U.S. intelligence agencies — aired a segment spooking viewers by claiming 5G technology can cause problems like brain cancer and autism. 

There’s no scientific evidence to back up the claims, but the success of America’s 5G network is seen as a threat to Russia, which will use every weapon in its arsenal to create doubt and confusion in countries it deems competitors or enemies. 

Whether for political gain (to help elect a U.S. President sympathetic to Russia) or to sabotage technological progress that threatens Russia’s place in the world economic hierarchy (as with 5G), Russia has developed and deployed a sophisticated disinformation machine that can be pointed like a tactical missile at our underlying democratic and capitalistic institutions. 

Economic warfare on a macro level is nothing new, and fake news and “pump and dump” tactics have long been used in stock manipulation. But more and more, individual companies are being targeted simply because the perpetrator has an axe to grind. 

Starbucks was a target in 2017, when a group on the anonymous online bulletin board 4Chan created a fake campaign offering discounted items to undocumented immigrants. Creators of the so-called “Dreamer Day” promotion produced fake ads and the hashtag #borderfreecoffee to lure unsuspecting undocumented immigrants to Starbucks. The company took to Twitter to set the record straight after it was targeted in angry tweets.

Tesla, Coca-Cola, Xbox and Costco are among numerous companies or industries that have also been targeted by orchestrated rumors.

The threat to American companies is so severe that earlier this month, Moody’s Investors Service released a report with a dire warning: Disinformation campaigns can harm a company’s reputation and creditworthiness. 

How would you respond to a fake but completely believable viral video of you as a CEO, employee (or even as a parent) admitting to stealing from your clients, promoting white supremacy or molesting children? The consequences to your reputation, personally and professionally, would be devastating — and often irreparable regardless of the truth behind the claims. As I explored in Deepfakes: When Seeing May Not Be Believing, advances in artificial intelligence and the declining cost of deepfake videos make highly credible imposter videos an immediate and powerful reality. 

Preparing your organization for disinformation attacks is of paramount importance, as your speed of response can make a significant financial and reputational difference. Just as you should develop a Breach Response Plan before cybercriminals penetrate your systems, you would also be wise to create a Disinformation Response Plan that:

  • Outlines your public relations strategy
  • Defines potential client and stakeholder communications 
  • Prepares your social media response
  • Predetermines the legal implications and appropriate response

Disinformation campaigns are here to stay, and advances in technology will ensure they become more prevalent and believable. That’s why it’s vital that you put a plan in place before you or your company are victimized — because at this point in the game, the only way to fight disinformation is with the immediate release of accurate and credible information. 


About Cybersecurity Keynote Speaker John Sileo

John Sileo is an award-winning author and keynote speaker on cybersecurity, identity theft and tech/life balance. He energizes conferences, corporate trainings and main-stage events by making security fun and engaging. His clients include the Pentagon, Schwab and organizations of all sizes. John got started in cybersecurity when he lost everything, including his $2 million business, to cybercrime. Since then, he has shared his experiences on 60 Minutes, Anderson Cooper, and even while cooking meatballs with Rachael Ray. Contact John directly to see how he can customize his presentations to your audience.

Are Alexa, Google & Siri Eavesdropping on You?

https://www.youtube.com/watch?v=Vw1lQKy16mg

Amazon and Google have both come out with wildly popular digital assistants that are loosely known as smart speakers. Amazon’s is called Alexa and Google’s is called, well, Google.

“Hey Alexa, would you say you are smarter than Google?”

Apple’s digital assistant is Siri, which can be found on all new Apple devices, including the HomePod, a less popular version of Alexa. For the time being, Siri isn’t quite as smart or popular as the other kids, so I’m leaving her out of this conversation for now. Sorry Siri.

Just the fact that Alexa, Google and any other digital assistant answer the minute you mention their name shows that they are ALWAYS LISTENING! Once you have triggered them, they are recording the requests you make just as if you had typed them into a search engine. So they know when you order pizza, what songs you like and what’s on your calendar for the week. They can also have access to your contacts and your location, and even combine that information with your buying and surfing habits on their websites.  

To be fair, Amazon and Google both say that their digital assistants only process audio after we trigger them with a phrase like “Hey, Alexa” or “OK, Google”. So they aren’t listening to EVERY conversation… YET. Why do I say, YET? Because the New York Times dug a little deeper and took a look at the patents that Amazon and Google are filing for future makeovers of their digital assistants. In one set of patent applications, Amazon describes, and I’m quoting here, a “voice sniffer algorithm” that can analyze audio in real time when it hears words like “love”, “bought” or “dislike”. It went on to illustrate how a phone call between two friends could result in one receiving an offer for the San Diego Zoo and the other seeing an ad for a wine club based on the passive conversation that the two of them were having.
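Conceptually, the “voice sniffer algorithm” in that patent filing is a keyword spotter running over a transcript. Here is a deliberately simplified Python sketch of the concept; the trigger words come from the patent description, but the mapping to ad categories is my own invention, not Amazon’s or Google’s actual code:

```python
# Toy sketch of a "voice sniffer": scan a conversation transcript for
# trigger words and emit an interest category for each hit. Purely
# illustrative; the category labels are hypothetical.

TRIGGERS = {
    "love": "affinity",
    "bought": "purchase-intent",
    "dislike": "negative-preference",
}

def sniff(transcript):
    """Return (word, category) pairs for each trigger word heard."""
    hits = []
    # Strip simple punctuation so "crowds." still tokenizes cleanly.
    for word in transcript.lower().replace(",", " ").replace(".", " ").split():
        if word in TRIGGERS:
            hits.append((word, TRIGGERS[word]))
    return hits

print(sniff("I love the San Diego Zoo, but I dislike the crowds."))
# -> [('love', 'affinity'), ('dislike', 'negative-preference')]
```

The unsettling part isn’t the ten lines of logic; it’s that the input is a private conversation you never agreed to have analyzed.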

In other words, no one had invited Alexa to the conversation, but she, or he, or they were there listening, analyzing and selling your thoughts anyway. That’s just creepy! It gets worse. The Times found another patent application showing how a digital assistant could “determine a speaker’s MOOD using the volume of the user’s voice, detected breathing rate, crying and so forth as well as determine their medical condition based on detected coughing, sneezing and so forth”. And so forth, and so forth. To that, I have only two words: Big Brother!

Let’s call these future digital assistants exactly what they are: audio-based spyware used for profit-making surveillance that treats us users like tasty soundbites at the advertising watering hole. Our private conversations will one day drive their advertisements, profits and product development. They are data mining what we say, turning it into a quantitative model and selling it to anyone who will buy it. Well, I don’t buy it. And I won’t buy one, until I am sure, in writing, that it’s not eavesdropping on everything said in my home.

Granted, these are all proposed changes to be made in the future, but they are a clear sign of where smart speakers and digital assistants are going. Their intention is to eavesdrop on you. Your One Minute Mission is to ask yourself how comfortable you are having a corporation like Amazon or Google eventually hearing, analyzing and sharing your private conversations.

I have to be forthright with you: many people will say they don’t care, and this really is their choice. We are all allowed to make our own choices when it comes to privacy. But the vitally important distinction here is that you make a choice, an educated, informed choice, and intentionally invite Alexa or Google into your private conversations.

I hope this episode of Sileo On Security has helped you do just that.

Delete Your Facebook After Cambridge Analytica?

I’ve written A LOT about Facebook in the past.

  • What not to post
  • What not to like
  • What not to click on
  • How to keep your kids safe
  • How to keep your data protected
  • How to delete your account

ETC! Search specific topics here.

And personally, I’m ashamed of myself for knowing exactly how social networks like Facebook take advantage of users and our data, and yet still having a Facebook profile. I’m not just sharing my own information; Facebook is also sharing every one of my “friends’” information through me. I’m currently thinking that the only way to protest this gross misuse of data is to delete my profile (which still won’t purge my historical data, but will stop future leakage).

And yes, I’ve written several times about how Facebook is allowed to sell your privacy.  Now, it turns out the practices I have warned about for years are taking over our headlines with a “little” news bit about how Cambridge Analytica has used data obtained from Facebook to affect the 2016 U.S. Presidential election.

Here’s a brief timeline:

  • In 2014, a Soviet-born researcher and professor, Aleksandr Kogan, developed a “personality quiz” for Facebook.
  • When a user took the quiz, it also granted the app access to scrape his or her profile AND the profiles of any Facebook friends. (Incidentally I was writing about why you shouldn’t take those quizzes right about the time all of this data was being gathered!  And, it was totally legal at that time!)
  • About 270,000 people took the quiz. Between these users and all of their friend connections, the app harvested the data of about 50 million people.
  • This data was then used by Cambridge Analytica to help them target key demographics while working with the Trump campaign during the 2016 presidential election.
  • Facebook learned of this in late 2015 and asked everyone in possession of the data to destroy it. (They did not, however, tell those affected that their data had been harvested.)
  • Cambridge Analytica said it had destroyed the data, and Facebook apparently left it at that.
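The scale in that timeline follows from simple arithmetic: each quiz-taker exposed not just their own profile but every friend’s. A quick back-of-the-envelope check in Python:

```python
# Back-of-the-envelope check on the numbers above: how many friends per
# quiz-taker would it take for 270,000 users to expose ~50 million profiles?
quiz_takers = 270_000
profiles_harvested = 50_000_000

friends_per_taker = profiles_harvested / quiz_takers
print(round(friends_per_taker))  # roughly 185 friends each, before overlap
```

Roughly 185 friends per user, well within a typical Facebook friend count, which is why one small quiz could quietly harvest a nation-sized dataset.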

That takes us up to recent days, when The Guardian and The New York Times wrote articles claiming that the firm still has copies of the data and used it to influence the election.

What’s happening now?

  • Facebook has suspended Cambridge Analytica from its platform, banning the company from buying ads or running its Facebook pages.
  • The Justice Department’s special counsel, Robert S. Mueller III, has demanded the emails of Cambridge Analytica employees who worked for the Trump team as part of his investigation into Russian interference in the election.
  • The European Union wants data protection authorities to investigate both Facebook and Cambridge Analytica. The UK’s information commissioner is seeking a warrant to access Cambridge Analytica’s servers.

And what should you be doing?

Consider deleting your profile. I am. I’ve written about how to do that before and how to weigh deactivating your account versus deleting it. Consider carefully before making that choice.

Remember that the real illusion is the idea that there is anything significant we can actually do to protect our privacy on Facebook. Facebook provides an effective privacy checkup tool, but it does nothing to limit the data that Facebook itself sees, that Facebook decides to share with organizations willing to buy it, or that hackers decide to target.

The data you’ve already shared on Facebook, from your profile to your posts and pictures, is already lost. There is nothing you can do to protect it now. The only data you can protect is the future data you choose not to share on Facebook. Here are my suggestions for a few proactive steps you can take right now:

  • Delete or deactivate your Facebook profile
  • Reread my post about Facebook Privacy from 2013—unfortunately, all of it still applies today!
  • Memorize this phrase: “Anything I put on Facebook is public, permanent and exploitable.”
  • Tell some little white lies on your profile.
  • And stop taking those quizzes!

John Sileo is an award-winning author and keynote speaker on cybersecurity, identity theft and online privacy. He specializes in making security entertaining, so that it works. John is CEO of The Sileo Group, whose clients include the Pentagon, Visa, Homeland Security & Pfizer. John’s body of work includes appearances on 60 Minutes, Rachael Ray, Anderson Cooper & Fox Business. Contact him directly on 800.258.8076.

A Smarter Solution for Thief-Proof Passwords

Product Review on Password Manager Software

It often amazes me to find out how many people shy away from implementing ideas that they KNOW will make them safer. There are a multitude of reasons, I know:

  • Ignorance: “I didn’t know there was a helmet law in this state.”
  • Fear: “But if I put my money in a bank, there could be a run on it.  It’s safer under my mattress.”
  • Misunderstanding:  “Well, I thought that sign meant I could park here for free on Sunday.”
  • Laziness: “It’ll be okay to leave my laptop on the table while I run to the bathroom real quick.”

I could reel off ideas for literally hours, and every one of these reasons relates directly to not safeguarding your passwords as well. But I want to assure you that safeguarding them may be THE most important thing you do to secure your data. One of the easiest things anyone can do is use a password manager program. There are a lot to choose from, but the one I personally recommend is the award-winning 1Password, which remembers and securely encrypts all of your passwords so you don’t have to. You merely come up with one secure master password and then train 1Password to log in to sites for you.

So what exactly are the features of 1Password?  There are a LOT!  The best:

  • Strong password generator— a single click gives you a random, extremely strong new password using combinations of hyphens, digits, symbols and mixed-case letters.  No more having to think of (and try to remember!) catchy, unhackable passwords for each account.
  • All these strong passwords are saved within 1Password in a highly protected way, and are ready to be automatically accessed when needed by simply typing one master password that only you know.
  • Ease of use– one click can open your browser, take you to a site, fill in your username and password, and log you in.
  • 1Password can sync your data across all your devices automatically through iCloud and Dropbox, or locally over Wi-Fi where your data never leaves your network.
  • The vault will store your credit cards, reward programs, membership cards, bank accounts, passports, wills, investments, private notes and more.  It has been compared to a 21st-century digital wallet.  (But no one can pickpocket you.)
  • 1Password is one of the few password manager options to allow file attachments, so you can safely store related receipts and images, and it will also keep track of your software licenses.
  • 1Password can show all your items with weak, duplicate, and old passwords so you can decide which ones to fortify and update.  No more using five variations of your childhood dog’s name.  It will look at the strength of each password as well as find duplicate passwords and replace them with strong, unique ones.
  • 1Password is fluent in multiple platforms and browsers, including Mac, Windows, iPhone, iPad, Android, and Windows Phone.
  • If your 1Password vault is in Dropbox or a USB thumb drive, you can decrypt and use it from any traditional computer in the world with a modern browser including Safari, Chrome, Firefox and Opera. This has security implications of its own, which I’ll address in a later post.
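The “strong password generator” idea in the first bullet is easy to sketch. Here is a minimal Python illustration using the standard library’s `secrets` module; it shows the concept (random characters drawn from several classes, with each class required), not 1Password’s actual implementation:

```python
# Minimal sketch of a strong password generator. Illustrative only --
# a real password manager does this (and much more) for you.
import secrets
import string

SYMBOLS = "-_!@#$%"

def generate_password(length=20):
    """Return a random password containing lower, upper, digit and symbol."""
    alphabet = string.ascii_letters + string.digits + SYMBOLS
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Reject candidates missing any character class.
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in SYMBOLS for c in pw)):
            return pw

print(generate_password())
```

Note the use of `secrets` rather than `random`: password material needs a cryptographically secure source of randomness, which is exactly what a good password manager uses under the hood.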

The prices vary based on the platform used and license purchased, but the prices are reasonable and worth it!

Fully 50% of the corporations that I work with and speak to have had data breaches due to poor password habits. Surprising, given how many of those would have been avoided had they simply used password manager software like 1Password.

https://www.youtube.com/watch?v=VgwQPhpRPd0

John Sileo is an award-winning author and keynote speaker on identity theft, internet privacy, fraud training & technology defense. John specializes in making security entertaining, so that it works. John is CEO of The Sileo Group, whose clients include the Pentagon, Visa, Homeland Security & Pfizer. John’s body of work includes appearances on 60 Minutes, Rachael Ray, Anderson Cooper & Fox Business. Contact him directly on 800.258.8076.

Facebook Privacy Settings Get Needed Update

Facebook Privacy Settings… Some may say it’s too little, too late. I’m relieved that Facebook is finally responding to concerns about their confusing and weak privacy settings.  The social media giant (which has been losing users of late) has recently made several changes to their settings.

Facebook Privacy Settings Update

  1. Additional photo settings.  Your current profile photo and cover photos have traditionally been public by default. Soon, Facebook will let you change the privacy setting of your old cover photos.
  2. More visible mobile sharing settings.  When you use your mobile phone to post, it is somewhat difficult to find who your audience is because the audience selector has been hidden behind an icon, and this could lead to unintended sharing.  In this Facebook privacy settings update, they will move the audience selector to the top of the update status box in a new “To:” field similar to what you see when you compose an email, so you’ll be able to see more easily with whom you are sharing.
  3. Default settings for new users.  Instead of automatically defaulting to “public”, new users will now have their default set to “friends”.  They will also be alerted to choose an audience when they post for the first time. This is a significant step in the direction of a business best practice called Privacy by Default.
  4. Privacy checkup tool.  Users may encounter a “privacy dinosaur” that pops up to lead them through a privacy checkup.  (At this time, it is not a consistent feature: Facebook is “experimenting” with it.) The privacy checkup tool will cover a number of settings, including who they’re posting to, which apps they use, and the privacy of their profile information.
  5. Public posting reminder.  The privacy dinosaur will also remind you when you’re about to post publicly to prevent you from sharing an update with more people than you intended.
  6. Anonymous login.  This feature allows you to log into apps so you don’t have to remember usernames and passwords, but it doesn’t share personal information from Facebook. Traditionally, people using Facebook Login would need to allow the website or app to access certain information in their profiles. I’m also happy to see Facebook moving in this direction, as universal logins are one of the easiest backdoors for cyber criminals to exploit.

Facebook has been criticized for having unreasonably complicated privacy settings, had to pay a $20 million settlement for giving away users’ personal information, and frankly never seemed to care very much about personal privacy.

I’m guessing that Facebook has learned a valuable lesson: that by giving their customers the privacy controls they desire, they are creating happier, more loyal users, which is a long-term strategy for success. The need for change hasn’t disappeared, but these Facebook privacy settings are a step forward.

John Sileo is an award-winning author and keynote speaker on identity theft, social media privacy, fraud training & technology defense. John specializes in making security entertaining, so that it works. John is CEO of The Sileo Group, whose clients include the Pentagon, Visa, Homeland Security & Pfizer. John’s body of work includes appearances on 60 Minutes, Rachael Ray, Anderson Cooper & Fox Business. Contact him directly on 800.258.8076.

Do Fitness Apps Share Your Health w/ Others (Insurance Co’s)?

Is your health and fitness app sharing your health score with your insurance company? Do health apps pose privacy risks?

I recently had the opportunity to attend a very informative webinar presented by the Privacy Rights Clearinghouse entitled “Mobile Health and Fitness Apps: What Are the Privacy Risks?”

It was based on a nine-month study of the privacy practices of mobile apps that many individuals use to monitor their health, learn about specific medical conditions, and achieve personal fitness goals.  Such apps include those that support diet and exercise programs; pregnancy trackers; behavioral and mental health coaches; symptom checkers that can link users to local health services; sleep and relaxation aids; and personal disease or chronic condition managers.

These apps appeal to a wide range of consumers because they can be beneficial, convenient, and are often free to use.  However, it is clear that there are considerable privacy risks for users – and that the privacy policies (for those apps that have policies) do not describe those risks.

The most common way these apps invade your privacy is by connecting to third-party sites and services (imagine having your health score shared with insurance companies!).  Doing so without informing users seems to be the norm, not the exception.  In fact, more than 75% of free apps and 45% of paid apps use behavioral tracking, often through multiple third-party analytics tools.

Here are some of the key findings of the report:

  • Many apps send data in the clear – unencrypted – without user knowledge.
  • Many apps connect to several third-party sites without user knowledge.
  • Unencrypted connections potentially expose sensitive and embarrassing data to everyone on a network.
  • 72% of the apps assessed presented medium to high risk regarding personal privacy.
  • The apps that presented the lowest privacy risk to users were paid apps. This is primarily because they don’t rely solely on advertising to make money, which means your data is less likely to be shared with other parties.
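To make the first two findings concrete, here is a minimal sketch of why sending data “in the clear” matters. The endpoint and field names below are hypothetical, invented purely for illustration: over plain HTTP, everything in a request travels across the network as readable text that anyone sharing the connection can capture.

```python
from urllib.parse import urlencode

# Hypothetical health data a fitness app might sync to its servers
payload = {"user_id": "12345", "weight_kg": "82", "resting_hr": "71"}

# Over plain HTTP, the full URL -- including these values -- crosses the
# network as readable text that a packet sniffer on an open Wi-Fi
# network can capture.
insecure_url = "http://api.fit-example.com/sync?" + urlencode(payload)

# Over HTTPS, TLS encrypts the path, query string and body in transit;
# an eavesdropper sees only the destination host, not the data itself.
secure_url = "https://api.fit-example.com/sync?" + urlencode(payload)

print(insecure_url)
```

The fix on the app developer’s side is simply to use the HTTPS endpoint. The problem for consumers is that you generally can’t tell from the outside which one an app uses, which is exactly why the study’s finding is so worrisome.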

Advice for consumers when using fitness or health apps:

  • Research the app before you download it.
  • Consider using paid apps over free apps if they offer better privacy protections.
  • Make your own assessment of the app’s intrusiveness based on the personal information it asks for in order to use the app.
  • Assume any information you provide to an app may be distributed to the developer, third-party sites the developer uses for functionality, and unidentified third-party marketers and advertisers.
  • Try to limit the personal information you provide, and exercise caution when you share it.  If the app allows it, try the features first without entering personal information.
  • Ask a tech savvy friend to help you determine what information an app is asking for, help you navigate settings, and potentially help you restrict the information an app gathers.
  • If you stop using an app, delete it.  If you have the option, also delete your personal profile and any data archive you’ve created while using the app.

I would hope that mobile app developers would create products with privacy in mind and implement responsible information privacy and security practices. Until then, users should assume that everything they enter into an app is sent to the developer and possibly to many unidentified third parties, and should only provide information they feel comfortable sharing.

John Sileo is an author and highly engaging speaker on internet privacy, identity theft and technology security. He is CEO of The Sileo Group, which helps organizations to protect the privacy that drives their profitability. His recent engagements include presentations at The Pentagon, Visa, Homeland Security and Northrop Grumman as well as media appearances on 60 Minutes, Anderson Cooper and Fox Business. Contact him directly on 800.258.8076.

Digital Footprint: Exposing Your Secrets, Eroding Your Privacy

Does your digital footprint expose your secrets to the wrong people? 

National Public Radio and the Center for Investigative Reporting recently presented a four-part series about privacy (online and off) called Your Digital Trail. To get the gist of how little privacy you have as a result of the social media, credit cards and mobile technology you use, watch this accurate and eye-opening explanation of how you are constantly being tracked.
[youtube https://www.youtube.com/watch?v=bqWuioPHhz0]
Marketers, data aggregators, advertisers, the government and even criminals have access to a vivid picture of who you are. NPR calls it your digital trail; for years, I’ve referred to it as your digital footprint. Let’s take a quick look at what makes up your digital footprint.

What is your digital footprint? 

Just like a car leaving exhaust as it runs, you leave digital traces of who you are without even knowing it. Here is a partial list of the ways that you are tracked daily:

  • cookies on your computer and apps on your smartphone or tablet
  • your IP address and internet-enabled devices
  • search engine terms
  • mobile phone geo-location
  • license-plate scanners
  • email and phone record sniffing
  • facial recognition systems
  • online dating profiles
  • social networking profiles, posts, likes and shares
  • mass-transit smart cards
  • credit card usage and loyalty cards
  • medical records
  • music preferences and talk shows you listen to on smartphone apps
  • ATM withdrawals and wire transfers
  • the ever-present, always rolling surveillance cameras that tell what subway you rode, what store you shopped in, what street you crossed and at what time

Is there anything, you might ask, that others don’t know about you? Not much.

What happens to your data that is tracked? 

According to NPR, a remarkable amount of your digital trail is available to local law enforcement officers, IRS investigators, the FBI and private attorneys. And in some cases, it can be used against you.

For example, many people don’t know their medical records are available to investigators and private attorneys. According to the NPR story, “Many Americans are under the impression that their medical records are protected by privacy laws, but investigators and private attorneys enjoy special access there.”  In some cases, they don’t even need a search warrant, just a subpoena. In fact, some states consider private attorneys to be officers of the court, so lawyers can issue subpoenas for your phone texts, credit card records, even your digital medical files, despite the HIPAA law.

Kevin Bankston, senior attorney with the nonpartisan Center for Democracy and Technology, explains that the laws that regulate the government regarding privacy were written back in the analog age, so the government often doesn’t have many legal restraints. When the Fourth Amendment guaranteeing our rights to certain privacies was written, our Founding Fathers weren’t thinking about computers and smartphones!

Specifically, the Fourth Amendment states, “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated.”  In the “old days” police would have had to obtain a search warrant (showing probable cause) and search your home for evidence of criminal activity.

But since the 1960s and 1970s, the Supreme Court and other courts have consistently ruled that if you have already shared some piece of information with somebody else, a warrant is no longer needed. So now when you buy something with a credit card (letting your credit card company know what you’ve purchased), or drive through an intersection with license plate scanners (telling law enforcement where you’ve been) or Like something on Facebook (letting the social network and everyone else know your preferences), you have, in essence, given the government (as well as corporations and criminals) the right to gather information about you, whether you are guilty of anything or not. So much for probable cause.

In this age of cloud computing, the issue becomes even more, well, clouded. Take the case of a protester arrested during an Occupy Wall Street demonstration in New York City. The New York DA subpoenaed all of his tweets over a three-and-a-half-month period. Of course, his lawyer objected, but the judge in the case ruled that the proprietary interests in the tweets belonged to Twitter, Inc., not the defendant!

How can we defend our digital footprint against privacy violations? 

My takeaway from the NPR piece? We are so overwhelmed by the tsunami of privacy erosion going on, by the collection, use and abuse of our digital footprints, that the surveillance economy we have created will only be resolved by broad-stroke legislative action. Until that happens, corporations, criminals and even our government will consume all of the data we allow them to. And so will we.

John Sileo is an expert on digital footprint and a highly engaging speaker on internet privacy, identity theft and technology. He is CEO of The Sileo Group, which helps organizations to protect the privacy that drives their profitability. His recent engagements include presentations at The Pentagon, Visa, Homeland Security and Northrop Grumman as well as media appearances on 60 Minutes, Anderson Cooper and Fox Business. Contact him directly on 800.258.8076.