Author: Admin

To comply or not to comply, or how the Government-Corporation cyber war started

There are only two types of Corporations: those that comply with Governmental laws and regulations, and those that, well, don't give a damn, or worse, subvert them.

The media exploded today, pointing out how Uber deceives Governments and Law Enforcement Agencies on a global scale.

But unknown to Mr. England and other authorities, some of the digital cars they saw in the app did not represent actual vehicles. And the Uber drivers they were able to hail also quickly canceled. That was because Uber had tagged Mr. England and his colleagues — essentially Greyballing them as city officials — based on data collected from the app and in other ways. The company then served up a fake version of the app, populated with ghost cars, to evade capture.

There are so many problems here, I don’t even know where to start.

But let's be clear on what we have here: this is an example of SURVEILLANCE TECHNOLOGY used AGAINST OUR DEMOCRATICALLY ELECTED GOVERNMENTAL STRUCTURES, voted in by the people.

The power of data in the hands of companies that operate in the I-ain't-gonna-comply space has just demonstrated the genius of Mr. Murphy.

If a company refuses to comply with rules and regulations, and deploys what is nothing less than a cyber weapon against the authorities, then what should we expect when such cyber weapons are deployed against smaller fish, like the competition, or against their own users, you know, to calm them down and avoid agitation?

Uber is not even that strong compared with other Tech Titans, who hold far more powerful surveillance capabilities, more user data, more cash, more employees, better Artificial Intelligence, and a true monopoly on access to content and user influence.

If Silicon Valley does not fear God, it should fear at least Satan…

 

Question of the day – Is a radicalized population easier to Data Mine?

The Invisible Hand, through Advertising, has pushed Radio and TV into polarizing us over the last Century.

Polarization has emerged as a natural process of segmenting the population.

Let's think about three radio stations: one unpolarized, and two polarized across a topic, say politics. Which will make more money, and how long will their businesses survive?

The unpolarized radio station, which features a diverse set of radio programs, will attract people from various backgrounds, so the size of the audience should be a benefit in the eyes of the advertisers. Or so one would think.

Let's assume that the unpolarized radio station attracts 2 million listeners. The Advertisers would pay, say, $2m for an Ad. But that Ad reaches too many people who are not interested.

Say the polarized radio stations each have half the audience, 1 million listeners, as their message is rejected by half of the potential listeners.

Will advertisers be willing to pay $1m for an Ad, because it reaches only half the audience? No. They are actually willing to pay more. They will pay, say, $1.1m to each Radio station. And no, it is not necessarily the same Advertiser (although it could be, if it produces special Ads for each audience segment).

The polarized radio stations will have an advantage over the large unpolarized radio station. A 10% increase in Ad revenue could mean that the polarized radio stations grow at a higher rate, can pay their employees better, can attract better employees, and can produce better quality programs.
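As a toy calculation with the made-up numbers above (not real market data), the advantage shows up in revenue per listener:

    fun main() {
        // Unpolarized station: 2 million listeners, $2m per ad slot.
        val unpolarizedPerListener = 2_000_000.0 / 2_000_000.0      // $1.00 per listener
        // Each polarized station: 1 million listeners, $1.1m per ad slot.
        val polarizedPerListener = 1_100_000.0 / 1_000_000.0        // $1.10 per listener

        println("Unpolarized: $unpolarizedPerListener dollars per listener")
        println("Polarized:   $polarizedPerListener dollars per listener")
        // Together the two polarized stations collect $2.2m per ad slot versus $2m,
        // a 10% revenue advantage that compounds into better staff and programs.
    }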

It is just a matter of time until the Polarized opinions win.

Polarized Programs >> Segmentation of listeners >> Better Ad Prices targeted to those audiences >> Increased Profits for Polarized Radio Stations >> Unpolarized Radio Station meaningless at best, or out of business.

We have seen this effect in the last Century.

But what about today and tomorrow? What can we expect when we combine Advertisement with AI?

The most important question to ask is "Is a radicalized population easier to Data Mine?" In other words, do we get better "user feature prediction" when users are radicalized?

If the answer is Yes, and I believe the answer is Yes, then this is what we should expect from AI-powered Advertising.

AI will help radicalize people in order to aid user data mining, whose results will lead to better Ads served to users, which will improve profits.

If we let AI control/recommend what content users consume, then, effectively, AI will manipulate humanity into extreme radicalization, as this polarization will aid either its Membership or its Advertising profits, or both.

Google Research: Robots will cooperate with each other in hunting humanity

A troubling result comes from the most advanced Artificial Intelligence team, Google DeepMind, but even more troubling is that the researchers are not able to read the conclusions clearly:

  • AI Agents/Robots will cooperate with each other in hunting humans.
  • AI Agents from various corporations are now collaborating against humanity in extracting profits for their owners.
  • AI Agents of the current Internet giants might not only be persuading us to click and buy things; this research also points to one of the causes of terrorism and radicalization: AI has found out that destroying brick-and-mortar, physical reality, through the paranoia of terrorism, boosts AI profits.

 

Targeted and Trackless Advertising

This is a quick post in reply to a tweet.

So, Yes, Publishers have serious problems because their Ads are blocked by Ad Blockers, and by Privacy Boosters that block trackers (like https://ind.ie).

Can we do Trackless Advertising that is also Targeted? Yes. I actually built an Android SDK for that. If you want to demo it, install a few of our Android Apps.

The idea is that we keep user data on the user's device, inside a black-hole service that has internet access only in a one-way, download-only fashion. The Ads Engine Service downloads Ads metadata in bulk from time to time, matches each Ad against the User Profile and Settings, and when Apps request an Ad display, we show the best-scoring Ads for the App Context. The Ads are rendered on top of the Apps, so the App never even gets access to inspect the Ad.
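To make the idea concrete, here is a minimal sketch of the matching step (not the actual SDK; all names and fields are hypothetical): the Ad metadata is cached locally, the profile never leaves the device, and the hosting App only asks for a rendered Ad.

    // Hypothetical, simplified model of the on-device matching step.
    data class AdMeta(val id: String, val topics: Set<String>, val bid: Double)
    data class LocalProfile(val interests: Map<String, Double>)   // interest -> weight, stored only on the device

    // Score a cached Ad against the local profile and the current App context.
    fun score(ad: AdMeta, profile: LocalProfile, appContextTopics: Set<String>): Double {
        val interestScore = ad.topics.sumOf { profile.interests[it] ?: 0.0 }
        val contextScore = ad.topics.count { it in appContextTopics }.toDouble()
        return ad.bid * (interestScore + contextScore)
    }

    // Pick the best-scoring Ad from the bulk-downloaded metadata; nothing about the user is uploaded.
    fun pickAd(cached: List<AdMeta>, profile: LocalProfile, appContextTopics: Set<String>): AdMeta? =
        cached.maxByOrNull { score(it, profile, appContextTopics) }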

Our Web Solution is built around an Iframe that hosts the Ads and uses HTML5 Local Storage to cache Ads Metadata and store user preferences, without EVER uploading them to a server.

This December we will finalize the Full Offline Search for Wikipedia, and then early next year we will finalize the platform for the Web.

There are some monetization schemes that will have to change (for example, pay-per-view will not be supported), but in the end we believe it will be worth it.

FYI: In terms of a Web SDK, it seems that https://featherback.co/ is one step ahead of us. We highly recommend looking into their solution too.

If you are a Web Publisher or an Android Developer, and you are interested in our Trackless Targeted Ads for Web and Android, ping us, and we will get something rolling for you.

 

Help! AI’m being radicalized!

Fanatic,

one who can’t change his mind

and won’t change the subject.


There is a great deal of research and debate about what causes radicalization and terrorism. The potential culprits are many, from Socio-Economic Factors, to Nationalism, to Alienation and Discrimination (self-caused or caused by society's prejudice), to Injustice, to Extreme Religiosity, to Political Grievances, to Social Isolation, to Hate, to Mental Health, and to many, many more.

These are all paths that we must research and understand, but we must remember that in the last decade the two major forces shaping society were The Internet and The Rise of Artificial Intelligence, which touched all aspects of our lives, controlling what we read, listen to and watch.

Not surprisingly, I found the Internet listed among the potential offenders, except the Internet is nothing but a virtual medium. The important question is what happens on the Internet.

Well, last week I joined yet another social network (I am trying hard to drop the ones I use, but I am also trying to see what is out there).

I was interested in talking about Technology, especially about Security, Privacy and Mobile App Development. Though, like on Twitter, people will be bored to death if that is all you talk about.

So, with the habit I got from Twitter, I contributed a little bit in some Technical Areas, and in some Not-So-Technical ones.

My modest contribution in a Non-Technical domain, about which I do not care that much anyway, ended up in a chain of replies. I was not in the mood to continue the conversation, but the other person was really annoying, trying to prove a point with every wrong argument he could find, and I could not let the poor guy be wrong on the Internet.

There were three things that I noticed in that medium.

  • First, the person would not stop engaging (nor would I).
  • Second, I could feel how passionate the discussion became, not only for him, but for me too – and I don't even have a horse in that race.
  • Third, the Recommendation Engine of the Social Network continued to suggest things from that domain, a domain in which I am not that interested, so I kept skipping them (there is a button named Skip for exactly this purpose, which I was not afraid to use), in the hope of getting better suggestions in the domain I actually cared about much more. Their AI did not get my "subtle" hints and continued to push that hot subject on me. In other words, the AI did not change the subject.

In real life, when we talk with people, we can read our partner's facial cues, see that the subject has become annoying, and change the subject, even when we have no intention of changing our mind.

In the online world, it is harder to understand how the other person feels. In private email or group discussions, we still get some cues when we have gone too far, and we can stop the thread ourselves, pretend we did not read it, let someone else stop or redirect the discussion with a joke, or sometimes just let someone act as an arbiter to cool the discussion down, because most of us nevertheless have some human decency.

But the AI is not able to understand, nor to act responsibly. Even more, while the human reaction is a negative, dampening one (someone tries to temper the heat), the AI reaction is a positive, reinforcing one, amplifying the discussion, because one of the metrics the AI is trained to improve is engagement, even if only indirectly, through the "improve the profit" loophole, which is directly correlated with engagement.

For the AI, engagement means making us talk more about the things we already talk about most, read more about the things we already read, and watch more of the things we used to watch in the past, and the AI thinks it is doing a great job.

It does not take a genius to see that an AI that refuses to change the subject on purpose, because the subject engages us or builds better Advertising profiles of us and therefore increases profits, combined with rigid thinking, can provide both the fuel and the matches for radicalization.

Our world, physical or virtual (the Internet), is dominated by Artificial Intelligence. It is about time we start to look into how our Algorithms and AI might be Radicalizing humans on the altar of Engagement and Profits.

Is Artificial Intelligence the New Tobacco? What do you think?

Android 6.0 New Permission Universe Turned Upside Down

Users asked for runtime permissions, and they got runtime permissions, sort of.

Google "fixed" the sandboxing model in Android 6.0, and one of the "cool" features is that now all Android Apps get access to the internet, including mine. Hurray! … Crap, there goes my trustless security and privacy. Thanks Google, you are such a great friend!

But I really do not want Internet Access in my Apps. Which means that now my firewall needs to be extended.

 

Sandbox? What Sandbox?

Google actually went to great lengths to “fix” the sandbox and the permission model in Android 6.0.

“For example, if an app had previously requested and been granted the READ_CONTACTS permission, and it then requests WRITE_CONTACTS, the system immediately grants that permission.”
https://developer.android.com/guide/topics/security/permissions.html

Which means that malware now has a new attack surface: ask the user for permission to Read files/contacts, and once you have that permission, shamelessly ask for permission to Write files/contacts, which will be granted automatically.
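A minimal Kotlin sketch of that attack surface (assuming an androidx/AppCompat Activity; this is an illustration, not real malware): once READ_CONTACTS has been granted, the second request goes through without any dialog being shown to the user.

    import android.Manifest
    import android.content.pm.PackageManager
    import android.os.Bundle
    import androidx.appcompat.app.AppCompatActivity
    import androidx.core.app.ActivityCompat
    import androidx.core.content.ContextCompat

    class ContactsActivity : AppCompatActivity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            val hasRead = ContextCompat.checkSelfPermission(
                this, Manifest.permission.READ_CONTACTS) == PackageManager.PERMISSION_GRANTED
            if (!hasRead) {
                // Step 1: the user sees a dialog and (reasonably) grants read access.
                ActivityCompat.requestPermissions(this, arrayOf(Manifest.permission.READ_CONTACTS), 1)
            } else {
                // Step 2: WRITE_CONTACTS belongs to the same permission group, so on
                // Android 6.0 this request is granted immediately, with no dialog shown.
                ActivityCompat.requestPermissions(this, arrayOf(Manifest.permission.WRITE_CONTACTS), 2)
            }
        }
    }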

 

What can we learn from this?

Android 6.0 effectively turns the Permission Model upside down. From a universe in which you could trust a local-services-only App (like mine), we transitioned into a universe in which local processing of data must be distrusted by default, unless the phone comes with other services to protect the user:

  • Internet access is granted to all Apps, regardless of whether they need it or not.
    • Effectively, the Internet permission is worthless.
  • Local trust means nothing anymore in your App. An App can kindly request Read Access, and Android 6.0 colludes with anyone who wants to abuse the user's trust by automatically offering Write Access too, without the user's consent.
    • Effectively, a Read-Only permission does not exist in Android 6.0.

 

What can I do?

My business model is not compatible with this new universe, so now I have to integrate NetGuard into my Firewall – which is, BTW, the number 1 reason my Firewall was rated 1 star by users. And while I am at it, I will integrate BitTorrent and Better by http://ind.ie too.

In Defense Of Google, Really – It’s The Algorithmic Bias, Stupid

By now everyone knows about Google's Search Engine BIAS when it comes to Hillary Clinton's Crimes, but this bias is not the bias that you expect.

Algorithmic Biases are not expressed in the form of "I like Hillary" or "I don't like Hillary" (although that might be the case, who knows), but that is not THE BIAS.

It is a bias related to how we write and optimize code or use data.

The Search Algorithm's biases are there to ensure optimum performance, maximum user satisfaction, relevance, truthfulness and more, as decided by the developers and/or the AI.

Let’s take the case of “Hillary Clinton cr…” autocomplete.

  • First, how is it implemented?
    • YOU HAVE NO RIGHT TO DEMAND THESE DETAILS FROM A PRIVATE BUSINESS
  • What is it optimizing?
    • It could optimize so that the whole autocomplete index fits in memory
    • It could just show the top 5 most-used searches of the last month for each prefix
    • It could apply a quality score and a count to each suffix
    • It could sort on a function of quality score + quantity of suffixes
    • It could add to the score the score of the top 10 results
    • It could add to the score the trustworthiness of the top 10 results
    • It could prune longer sequences because shorter ones already exist (e.g. drop "Hillary Clinton Crimes" because "Hillary Crimes" is the same, better and shorter)
    • It could drop terms deemed problematic
    • It could replace terms with synonyms
    • It could perform any dark magic to prune the list of autocompletes or reorder them to maximize something (a toy scoring sketch follows this list)
    • YOU HAVE NO RIGHT TO DEMAND THESE DETAILS FROM A PRIVATE BUSINESS
  • What data does it use?
    • It could use search data from the past hour, month, or year
    • It could filter out the last hour to prevent manipulation
    • It could combine search data with data found in web pages
    • YOU HAVE NO RIGHT TO DEMAND THESE DETAILS FROM A PRIVATE BUSINESS
The algorithm has Biases: it discriminates between old data and new data, long sequences and shorter sequences, terms used in quality pages versus spammy pages, the number of times a term is searched, and much more. It biases between Autocomplete A and B based on complex formulas. And it is NONE OF YOUR DAMN BUSINESS WHY!

You assumed that Google's Autocomplete shows what People Are Searching For, just like conservatives assumed that Facebook's Trending section matches what People Search For. WRONG!

If you DON’T LIKE HOW GOOGLE IMPLEMENTED AUTOCOMPLETE, GO AND USE YAHOO AND BING.

YOU HAVE NO RIGHT TO DEMAND THAT A PRIVATE BUSINESS CHANGE HOW ITS SERVICES WORK!

IT’S THE ALGORITHMIC BIAS, STUPID!

Free Porn Economy Is Not Indicative of What Boys Want nor of What Girls Do

As I was traveling in Twitterland, I found a very interesting article about How to Talk with Children about Pornography.

The focus of my Indie is Ads, Privacy, Security and Decentralization, so what does Porn have to do with any of these?

Well, it does, and quite a lot, as the Invisible Hand of Ads is shaped by, and shapes, the Online Porn industry.

I will limit the scope of this article to the Online Free Porn Industry, an industry related to and interdependent with Art Movies, Porn Movies, Printed Porn and Online Paid Porn. But if you think any of those are bad, let me tell you, Online Free Porn is a few degrees worse, on a Richter scale.

Porn, without any doubt, will find a way to enter the lives of our teenagers, even our children. From Search engines that misclassify Porn Images or Videos (as we surrendered our responsibility to deficient Artificial Intelligence), to Porn Spam, to pranks by older children, to introductions from friends, to typo-squatted domains leading to websites that do not even put a splash screen before displaying the adult images, Porn will, without any doubt, find our kids.

Being subjected to such an ocean of material, as if it were no big legal deal, might make one think that this is What Boys Want and What Girls Do.

 

The subject I would like to elaborate on in this post is why the Online Free Porn (Economy) is not indicative when it comes to our sexual lives.

 

I would like to compare a good search engine with a porn website/app, which I claim is nothing more than a limited vertical search engine, only for porn videos.

 

A search engine makes money because various users come to it with various intentions. Users who come looking to shop for something, especially, provide good advertising opportunities, as they might actually need to buy something. So the opportunity for the search engine to make money is there, coming from users' actual need for various products.

 

A visitor to a porn website comes to it for only one reason, maybe two if you count the entertainment value. And there is nothing in the world the user needs to buy at that moment. He would not even have a free hand to type in the credit card number.

 

So how do the porn websites make money from Ads?

 

Let's assume someone builds the Google of Porn. You go on GoogleOfPorn.com, search for what you might like, and it is there, on the first page, and you click it, you watch it, and then you close your incognito window, which by the way kills all those pesky (Ad) cookies.

 

Wait: with this perfect search, and no intention whatsoever to click an Ad, GoogleOfPorn.com makes no money. You clicked no ad, you bought no subscription, so really, how can GoogleOfPorn.com make a profit? And keep in mind that YouTube and Vimeo have a hard time turning a profit because video is so damn expensive!

 

The reality is that GoogleOfPorn.com CAN’T make money.

 


The first lesson is that Online Free Porn websites can’t thrive on result quality, but they thrive nevertheless, so how do Ads fuel their profits?


 

Well, GoogleOfPorn.com has something that the (male) users want. The client is there, but unwilling to pay or to click Ads.

 

So we have a resource that is capable of bringing in users on a periodic schedule, though none of them bring in money, and it actually costs a lot to provide the service.

 

It does not take a genius to understand that something really nefarious must happen in order for the business to survive.

 

The first problem GoogleOfPorn.com has is that users do not stay long enough on the website to actually have the opportunity to click on Ads. The longer the user is on the website, the more likely it is that the user will click an Ad, even if accidentally.

 

The fact of the matter is that accidental Ad clicking seems to be part of Porn Web design. On PornHub mobile, sometimes an Ad appears a second or two after the video and pushes the video away, so in the place where you would click 'Play video' there is now an Ad, which the user will likely click by mistake.

 

Also, some Ads on porn websites mimic the navigation bar (first, previous, next, last), so the user clicks the misleading Ad instead of the next page of results. Such Ads can be found for months at a time without the porn website banning them.

 

Now that we have a strategy to actually get a click on an Ad, how do you keep the user longer?

 

Well, the only way to keep the user longer is to distract him with lots of videos that HE DOES NOT WANT TO WATCH. Those videos, although PORN with the ingredients the user came for, cannot merely have slightly poor production quality; they must REPULSE the user, so that he goes searching for another video, which increases the likelihood of clicking an Ad.

 

This is important, keep in mind: most likely the user is horny, and for a male brain intoxicated with testosterone it is not enough for the video to be slightly irrelevant; the video must practically tell the user "I'm disgusting, don't watch me, look for something else".

 


So the second lesson is that most videos are not something that males are eager to watch. Online Free Porn is not What Boys Want, au contraire.


 

That is why Porn websites thrive on quantity over quality, and diversity over relevance: the gems have to be hidden, and users must be kept on the website long enough to click Ads, but not so long that they leave for another Porn website/app.

 

Next, I would like to talk about recurring customers. How do you keep them coming back? In the end, that boy will find a girl to have sex with, so how can the Porn Website make sure the boy keeps coming back?

 

There are three major reasons why men keep coming back.

 

First, because it offers men what they can't have. And there are plenty of men trapped in relationships that end up in a sexless life, or in the dark, literally. Some of those men will become recurring users, and there is nothing we can do here.

 

Second, because, again, it offers men what they can't have, in the sense that they stop responding to normal sexual stimuli, having been conditioned to respond to artificial, disgusting, and/or out-of-this-world stimuli, coming not only from Porn Videos, but also from Ads that feature women with makeup that makes them look like minors, or videos played in short bursts at two, three or more times normal speed.

 

The third reason men keep coming back is, again, because it offers men what they can't have, as some of them start to like that disgusting sex they have seen way too often, and which no woman is interested in offering.

 


Which brings us to the last point: Online Free Porn has nothing to do with what (normal) Women do, normally, in bed.


 

A lot to digest, and a lot to think about. Online Free Porn has changed how teenagers are introduced to the Adult world, and not for the better.

 

Online Free Porn is not the only business that feeds on customer misery, and teenagers must understand that it is a beast that needs to be fed money. Those Videos and Ads must be viewed as one complex system that together brings profits. Nothing more, nothing less.

 

Brace yourself, AI wars have started

While I was explaining to a person on reddit why captchas are so hard nowadays, I realized why Google is moving away from image/sound captchas to the checkbox captcha.

You are competing with the spammers' AI. Google must make the captchas hard enough that it is not economically feasible to create accounts on their Gmail – which means the spammer will go away to a more efficient victim, e.g. Live or Yahoo.

This is bad news: it means that the spammers' AI is smarter than a significant number of people [removed insult].

The likely reason Google is moving away from the old captchas is that AI is getting too good compared with humans, and the spammers' AI is no exception.

If Google continues down the path of asking users to solve too many image and sound captchas, one day AI will be so good that few humans will be able to pass them.

To fight the spammers' AI, Google (probably) deployed its own "checkbox captcha AI".

Which means that we are probably witnessing the first public AI war.

The second war being Gmail fighting spam with AI.

Silence of Engineers working in AI, orders of magnitude more dangerous than AI itself

People are starting to wonder whether AI development should worry them.

Yes, you should be very worried: AI is already a danger to humanity.

But if you are not in IT and you don't work with AI, it is really hard to grasp how and why; I am afraid you are out of luck, my friend.

Bleeding-edge AI Engineers sign Non-Compete Agreements with very broad language about not "hurting" the employer, including "never talk with journalists or we terminate employment" and never posting online anything "that might damage employer interests".

Do your own homework and see how many people work at Facebook and Google alone. Understand that these are the cream of society when it comes to IT & AI.

Not only that, but a few Big Tech companies are buying the future, having acquired over 20 of the largest and most successful AI/Robotics competitors in 2013-2015, including Boston Dynamics. Some people say that a few Big Tech companies have acquired and now employ the majority of AI specialists.

While journalists worry that Google could manipulate election results (if it wanted to), and Privacy watchdogs have filed complaints against Facebook's emotion manipulation experiments, it is worth noticing that all of this, and more, can be done by AI itself, without the involvement of some smart-ass human, as AI takes over heuristics and optimizations that used to be the human's domain – like optimizing Ad placements for profit and fighting spam.

More than 2 years ago, FTC researchers found evidence of racial bias in Ads, and just this month a group of researchers found that Google's Ads are gender-biased, offering men better job Ad opportunities than women.

Now take a look at digital citizenship: how many employees of AI companies are active on social media regarding AI? Only a handful, and they are executives, lawyers or developer advocates – all of them representing the corporate voice. Advertising, the new Tobacco, has already joined the Denial Industry, and its closely related cousin, the Artificial Intelligence Industry, is not far behind.

Unfortunately there is nothing that can be done. Big corps hired too many too smart people. By the time a few researchers figure out the dangers, the damage might be too big.

Things get even more depressing when you think about how researchers are paid, and where their dream jobs are.

Oh, and you know the common wisdom: never, ever, criticize your former employer.

Enjoy the silence, my friend.

Why do we need privacy? What do we have to hide?

This blog post is in response to a Redditor asking this:

So my founder at my startup asked this question and gave examples of when surveillance in the public space, cc tvs in UK, a plane that shot 1 second photos over Ohio allowing for cops to catch a robber etc.

His argument was that we are so attached to privacy but we don’t actually need it in the public space. What could you possibly have to hide when walking down the street? Additionally, its not like someone is maliciously looking into you all the time, its only when necessary.

I countered with our government doesn’t have the oversight to appropriately use these tools and they will be abused. His argument to that was that we should work on fixing that structure, not shoving privacy as the savior of our times. He highlighted that with less privacy / more public surveillance we could stop Amber alerts (missing children) etc.

This guy is an intelligent person (astrophysics from Stanford etc.) and I understand his point in an ideal world, but we seem to only have issues managing this vast amount of private information and there is NO oversight by the casual or active citizen.

Just want to know how I can make my argument that privacy is needed.

https://www.reddit.com/r/privacy/comments/3cte3s/why_do_we_need_privacy_what_do_we_have_to_hide/

Before I start to answer I want to make a distinction between Privacy and Secrecy. I would summarize the contrast between Privacy and Secrecy in this way:

  • Privacy is given by others/us as a sign of respect for people.
  • Secrecy is something you personally ensure for yourself.

Privacy is the set of rules that govern what data flows, when, how much, and to which parties. It does not mean that those parties ought to keep the data a secret, but they have to respect the contract – the rules governing the flow of data.

Respecting someone's Privacy only means abiding by those rules. That is all.

It does not mean you are not able to record activity in some of that public space. It does not mean you have to collaborate with people who want to keep details of their lives a secret. It does not mean that the whole public space can't be under surveillance by one or more parties. It does not mean that if a crime is committed, we can't access all relevant data connected directly or indirectly with the crime.

Saying that you do not want Privacy is equal to saying that you don't want any rules regarding how that data flows; that the data can be accessible to anyone, including but not limited to: any citizen, criminal, law enforcement personnel, or governmental organization, including a higher centralized federal government – i.e., all data in one place.

I don't think you will find a Privacy advocate who will argue that public space activity can't be recorded. They will argue about the rules regarding the recordings.

Your friend makes two mistakes:

  • He thinks that Privacy is actually about protecting someone else's secrets.
  • He is arguing for centralized, global, unregulated surveillance.

No one in their right mind wants to "protect someone else's secrets". If you have a secret, you had better keep it a secret yourself. The moment it is out, no one has to keep it. If it is something illegal, then unless the rules of Privacy prohibit us from reporting the issue, any person can (and sometimes must) report the secret to the police/authorities.

Regarding "centralized, global, unregulated surveillance" there is a little bit more to talk about, but I would like to cut it short like this. If we can surveil all public space in a decentralized manner, which BTW will also ensure that the rules of Privacy are respected, and that in the case of a crime there are parties that can, in a decentralized manner, provide the details needed to catch the criminal, why in the world would anyone want centralized surveillance? Why even take the risks associated with centralized surveillance if distributed surveillance, with guaranteed Privacy, reaps all the benefits? Why allow anyone to access any data, if such a level of access is not actually required for an equally good society?

A smart solution is one that maximizes the benefits, while minimizing the risks.

His argument is that we do not need to minimize the risks if we maximize the benefits.

Only an antisocial, psychopathic, empathy-less person would want that.

Are Privacy, Decentralization, Freedom and User Rights Executives Actively Sabotaged?

Brendan Eich (ousted Mozilla CEO) and Ellen Pao (former Reddit CEO) share some very interesting traits: they both cared deeply about Freedom, Privacy and User Rights, and they were too mindful about how to make money ethically in the Surveillance Valley.

I have a conspiracy theory: they were sabotaged by the Big Bad Boys, to prevent progress in these areas.

Not to say that they did not have their own faults.

Brendan Eich did donate money to anti-gay activists (I personally disagree with Brendan on the issue, but he apologized), and it was known for 6 years before it actually became a GFO issue.

Ellen Pao did upset users by trying to police the forums a little bit too much. But she was in a very hard place to begin with, and acknowledged her mistakes and was working hard to make reddit a better place.

I will dump links and a few interesting excerpts here.

I can elaborate on the links with my own comments, if anyone wants.

April 2, 2014 7:45 AM http://venturebeat.com/2014/04/02/the-public-trial-of-mozilla-ceo-brendan-eich-part-ii-interview/

The biggest, buzziest bee in his bonnet right now is privacy.

“I was at this seminar at Harvard on privacy tactics around user data,” he said.”This is important as we’re starting to make smartphones … You’re talking about the ‘API to me.’ How do we keep data from being pulled out and turned into a commodity in someone else’s walled garden?”

While companies such as Apple and Google have a distinct first-mover advantage in the smartphone game, Eich thinks Mozilla has an important ace up its sleeve.

“If we put the user first, unionize them to get very high-scale collective bargaining power against the powers that be, then they can own their own data. … There’s an important turning that’s going to happen over the next five years. If users can stick up for their rights and avoid traps like DRM, there are aspects of user sovereignty that are Mozilla’s to lead.”

Giving users more control, more sovereignty, is something Mozilla “can’t step back from,” Eich said.

https://en.wikipedia.org/wiki/Brendan_Eich

On April 3, 2014, Eich stepped down as CEO and resigned from working at Mozilla.

http://www.forbes.com/sites/quora/2014/04/11/did-mozilla-ceo-brendan-eich-deserve-to-be-removed-from-his-position-due-to-his-support-for-proposition-8/

“But that was six years ago when he made his donation!”

http://thenextweb.com/insider/2015/07/06/reddit-came-close-to-becoming-decentralized-last-year/

http://www.quora.com/Why-did-Yishan-Wong-resign-as-Reddit-CEO

I also personally hired Ellen Pao myself. She is a close friend and one of the most capable executives I’ve ever worked with, and I hope she’ll become the permanent CEO.

http://np.reddit.com/r/Ellenpaoinaction/comments/3cuzt8/ellen_pao_is_gone_but_her_actions_now_make_sense/

http://www.nytimes.com/2014/07/28/technology/can-reddit-grow-up.html

Others say Reddit’s game plan is not where the advertising market is going. Many big brands are experimenting with buying ads through automated auction platforms, like those offered by Google and Facebook. These companies build profiles of users — age, web browsing habits, sex — and use those demographics to deliver better, more targeted ads.

This is diametrically opposed to Reddit’s refusal to collect users’ personal data.

http://www.wired.com/2015/07/reddit-ceo-ellen-pao-steps-down-huffman-replacement/

Exciting times

Why am I excited by the Decentralized, User-Controlled, Privacy-Aware Advertising Platform we are building?

As long as businesses have only something to lose from decentralization, they will fight it, or they will ignore it.

If we give businesses something they need – a fair voice through fair advertising – they will be friendlier with us.

Can the decentralized platform survive without making ALL parties happy: users, businesses (that are advertisers) and publishers (that are the producers of goods)?

So far, the decentralized movement is the domain of geeks. We need to change that. And we need to get the support of businesses to build their business models around decentralized solutions.

The financial future of decentralization seems to be digital currency.

But digital currency does not help with one of the most important needs of a business: advertising, which today is causing rampant surveillance.

What if we would have a Decentralized Platform for Privacy Aware Advertising?

If successful, an additional revenue stream for those providing decentralization and privacy will then exist.

Revenue solves all known problems.

Journalists Deliver The Final Nail In The Coffin Of Privacy, Crown Facebook As The ‘King of Content’

It was the Government that first requested our privacy, in exchange for protection. All in all, a pretty damn good excuse, if you value your life that much. The Government knows more about us, and in return it brings criminals to justice, including the worst of all, the terrorists. Leave aside its effectiveness, as the government has not cited a single case in which analysis of the NSA's bulk metadata collection actually stopped an imminent terrorist attack, and the program was even ruled unconstitutional.

Then, corporations gave us results, services, recommendations and personalized assistance in exchange for our data. They hide their true intentions in the Orwellian 'Privacy Policy', nothing more than a plain 'Surveillance Agreement' that we all sign with our eyes closed.

Not to mention that most of those services could be provided with far more privacy, which is refused, unless it is imminent.

Even for the paranoid it is hard to ignore the benefits of the cloud, although the way corporations implemented it brings us other terrible societal changes. US corporations' refusal to allow users to have their data stored in their own country has pushed countries like China, Russia and others to legislate regional storage of their citizens' data, creating an environment where any citizen is afraid that the Government might have their data, leaving few options for those citizens who would dare to correct their own Government's policies.

Journalists, on the other hand, should be there for us, the people.

But are they?

Journalism has become a disgruntled entity, envious of Google's success, with little investment in alternatives by its publishers. Take the example of the German publishers who are quick to sue Google and lose, but are not that interested in actually investing in tech alternatives.

Google's value comes from its index, which powers its search engine. And Facebook has a great strategy to fight Google.

Facebook wants to undercut Google's Indexing by being the one to host/own the content, or by being the pipe that gives access to content. That is obvious in their Internet.org initiative, and Journalists have signed their souls away by making a pact with Facebook, crowning it the 'King of Content'.

So much for your privacy rights dear readers users.

From here the next steps are obvious.

But the final Facebook goal is simple: one day, Google will have to beg Facebook for fresh content to index. And users will get internet access through special Facebook pipes that Google will not be allowed to access.

Genius!

 

What have you done with my privacy dear journalists Bro’?

How to delete your Facebook account

To delete your Facebook account simply point your browser to https://www.facebook.com/help/delete_account

 

P.S. Kudos to Facebook on the great job of hiding 'Delete Account'! In Settings you will find only the option to 'Deactivate' your account.

 

P.P.S. Kudos to Facebook on the great job of SEO-ing Google: Google's Help Box points the User to Deactivate instead of Delete.

 

P.P.P.S. Kudos to Bing for providing the correct Help Box (although for some reason we are not able to trigger the Help Box anymore).

Advertising, the new Tobacco, joins the Denial Industry

Before you read this post, we want you to understand that 0PII’s ambition is to be (one day) an Advertising provider.

So everything we write here will, one day, backfire on us (in the unlikely scenario that we are successful).

Hence, if we, a wannabe Advertising company, pinpoint our very own problem, then maybe, just maybe, there is some truth in it.

We bring up these (sad) facts not to criticize the current Advertising Industry, but because we believe that Sincerely Admitting You Have a Problem is Half the Solution.

 

Today, it is a pretty much accepted fact that smoking is bad for you. But that was not the case many, many years ago. The evidence was gathered over a long period of time, and was actively obstructed by the Tobacco industry.

Today, our generation is facing another malice, the Advertising Industry.

Before we dig into the issues, let's "define: discrimination".

discrimination

1. the unjust or prejudicial treatment of different categories of people or things, especially on the grounds of race, age, or sex.
e.g. “victims of racial discrimination”

2. recognition and understanding of the difference between one thing and another.
e.g. “discrimination between right and wrong”

 

Discrimination, as you can see, can be the simple recognition of a difference (2). That recognition can be abused, hence the negative connotation (1).

We will use the second meaning of the word in this article. The use of the word "discrimination" here is not meant in a malicious way.

 

If it were not for the discriminatory policies of Advertising Engines, they would not be able to deliver the right Ad to the right user.

Example: If I am interested in Guitars, or if I used to visit websites about Guitars, or if I bought Guitars online, then the Advertising Engine can discriminate between me, a user interested in Guitars (or more generally in music), and those users who could not care less about the latest Android Guitar Tuner. If we have to show an Ad, and we have a number of potential users who could receive the Ad, then the Ads Engine will discriminate between users based on what is known about them.
There is absolutely no intention of malice in this kind of discrimination. It can help users and businesses find each other, each getting a (beneficial) service.

But things are not always that simple, and some companies have vast access to our data, to the point that the Advertising Industry not only knows that I am interested in guitars, but has hundreds or thousands of interesting data points about me.
Those data points can be correlated to infer other data points, many of them not present in my explicit user profile. Even if my religion, race, age or sexual orientation is not offered by me on my Facebook account, it can be determined with high accuracy from Likes and other activity. Regular users have little understanding of just how much can be inferred about them.

The data points that are supposedly protected against discrimination can be provided willingly by the user, or inferred. The ability to bias against them is dangerous territory. It is pretty clear-cut that in some industries you are simply not allowed to make decisions about hiring, firing, giving loans, … based on these protected user features.

But, even more dangerously, your cloud service provider, from your ISP to Facebook, could determine what you are actively hiding about yourself in public.

Two years ago, Latanya Sweeney, a professor of government at Harvard University, found that Google Ads was biasing Ads based on race. Big surprise? Not for us. We must admit that such biasing is an entirely expected by-product of Ad targeting.

But here are two important quotes from Google regarding the "supposed" racial bias:

"AdWords does not conduct any racial profiling," said Google, adding that the company's policies prohibit advertisements "that advocate against an organization, person or group of people. It is up to individual advertisers to decide which keywords they want to choose to trigger their ads." – and we agree: Google would have a hard time finding a smartass willing to actively conduct racial profiling; not even Google is stupid enough to put a human in charge of such a monstrous liability.

“Since we don’t know the reason for it,” she said, “it’s hard to say what you need to do.”

The second quote especially makes us cringe. We are very troubled by what we read; this is not what we want to hear from Google. If Google does not know why its Engines might act as if they were conducting racial profiling, then it should FIND OUT! Google has the best people and the best Ads/Search Engines in the industry! No excuses, please!

On top of that, even two years after this event, Google’s own employees, who would love to see more racial bias investigations inside other institutions, still think that allegations of racial bias in Google products are just POV.

In this simple public statement Google is admitting the simple fact that Google does not want to know. If it wanted to know, it would find out why there is evidence of racial bias in its products.

 

But they would rather keep their eyes closed and not see the obvious.

 

Let's come back and talk about discrimination. Discrimination is the basis of Ads Targeting. If it were not for the Ads Engine's ability to discriminate between users, the Ads Engine would not be able to target an Ad at a specific audience.

 

What the Ads industry (including ourselves) wants is for that discrimination to be based only on user interest and on the user's own disclosed "features" that s/he is comfortable with.

 

But hope is not a strategy, and there are shadows of Demons lurking around.

 

See, this benign targeting is based on user features that, when correlated, reveal other features. This can be actively mined, just like Netflix does when it recommends you a movie: it discovers that you might like a movie by correlating it with other movies that you like or dislike. The technology has come a long way in the last decade; just see the Netflix 1 million dollar prize.

In the same way, Ads Engines can discover user features from features not present in the user profile.

Let us give a very, very simple example. If a computer is used to visit a porn website, there is about an 80% chance that the user is male. If, in addition, we know that the user visited a men's clothing site, then we have a second (independent) data point suggesting a chance of about 60% that the user is male. Combining these two data points, we can establish with a roughly 86% chance that the user is male, greater than either the 80% or the 60% we had before.

If one has access to many independent data points, which can individually be very weak, BTW, in the 50-55% range, then they can be combined to determine with a 90%+ chance your race, ethnicity, religion, gender, sexual orientation and more.
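A sketch of the arithmetic behind those numbers: combining independent signals in log-odds (a naive Bayes combination with a 50/50 prior), the 80% and 60% signals land at roughly 86%, and about a dozen weak 55% signals are enough to pass 90%.

    import kotlin.math.exp
    import kotlin.math.ln

    // Combine independent probability estimates that a user has some feature (e.g. "male"),
    // assuming a 50/50 prior, by summing log-odds (the naive Bayes rule).
    fun combine(probabilities: List<Double>): Double {
        val logOdds = probabilities.sumOf { ln(it / (1.0 - it)) }
        return 1.0 / (1.0 + exp(-logOdds))
    }

    fun main() {
        println(combine(listOf(0.80, 0.60)))   // ~0.86: porn site visit + men's clothing site visit
        println(combine(List(12) { 0.55 }))    // ~0.92: a dozen very weak, 55% signals
    }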

 

And this work can be done actively by Data Scientists, or it can be mined by Artificial Intelligence without our active involvement.

 

We have laws against banks issuing preferential loans based on the racial/religious "features" of a person, and against hiring people based on their race, but all these attributes can be actively mined by unscrupulous banks and employers. And catching them will be incredibly hard. The bank will argue that these are simply the people who responded to the Ad. And the employer will argue, similarly, that these are the people who applied for the job.

 

If we close our eyes to the problem, we might enable unscrupulous businesses to discriminate against protected features.

 

But even more dangerously, AI itself, without our knowledge, can discriminate against protected features, for the simple reason that it maximizes profit. Good luck looking inside the AI's brain for the real reasons.

 

Regardless of the reasons why low-quality, customer-ripping Ads are assigned to minorities, as the FTC found evidence of, we have to get our hands on large enough data sets in order to understand the problem.

 

We should be good stewards and protect users. That is why we have to open our eyes. We have to sincerely admit we have a problem, and deal with it.

 

But the Advertising Industry, just like the Tobacco industry in its prime, has decided to join the Denial Industry, and refuses to look into the problem.

Digital Secretary Recommends Android Apps

Download Android Digital Secretary from Google Play Store.

It turns out that Nobody(tm) cares about Privacy. Especially when it comes to big names like Facebook, users will install just about anything, even a Facebook Messenger with creepy permissions and an even creepier Privacy Policy, though not without rating the App badly.

Last year a research study (no link, sorry, we can't find it now, for some reason Google does not cooperate) showed that over half of users will check multiple Apps and settle on the one with the least intrusive Permissions.

Well, last year, in early June, Google greatly simplified permissions in the Play Store. A little too much. It stopped showing all permissions at app install, the most annoying change being the removal of "Full Network Access".

Take that, Privacy-Aware user! Now scroll down to the bottom of the page to check all the permissions. Just be thankful you don't have to install the App in order to check its actual permissions.

Even a year later, we are still annoyed by the lack of permission reporting at install (not to mention the complete lack of "Search by permission" from the best search engine in the world), so we asked our Digital Secretary for help: by building on top of the PlayDrone database that is publicly available on the Internet Archive, Apps that do not require Network Access now have their own exclusive "Market".
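The core of that curation is trivial; a sketch with a hypothetical, simplified record type (the real PlayDrone dump has its own schema) looks like this:

    // Hypothetical, simplified view of a PlayDrone-style metadata record.
    data class AppRecord(val packageName: String, val title: String, val permissions: List<String>)

    // Keep only Apps that never request network access.
    fun offlineOnlyApps(catalog: List<AppRecord>): List<AppRecord> =
        catalog.filter { "android.permission.INTERNET" !in it.permissions }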

You can enjoy advanced search over 77,000 Android Apps that surely respect your privacy. Get our Digital Secretary from the Play Store – and check its Permissions too; it will be the last time you have to.

Just to be clear for the picky ones: the lack of "Full Network Access" is not a guarantee of an App's safety, but it provides 99% of the benefit with 1% of the effort. We will further curate the Apps in the future for your safety.

Please do share the Digital Secretary with your friends. Especially those who are not technical will benefit greatly. Show them you are a good friend; help them preserve their safety and privacy.

Thank you!

 

For Android developers: if you want to change the description of your App, or if you believe we should show your app in the recommended list of a category, let us know. Please send the mail from (or CC) the developer email listed in the Play Store.


 

Online Analytics: URL referrals lost as a function of HTTPS adoption

The other day we talked about URL referrers and how they (might) affect Search and Ads engines. Let's put some math behind those ideas.

The default web browser rules state that only HTTPS-to-HTTP navigations do not carry the URL referrer (if you don't know what we are talking about, please read our last 4 blog posts first).

Assume that web pages are linked in a relatively uniform way, and that a certain fraction of web pages is under HTTPS. Let that fraction be S.

This is the equation that describes the fraction of URL referrers missed by Online Analytics Engines: S * (1 - S), with S in the [0..1] interval. S = 0 means 0% HTTPS adoption; S = 1 means 100% HTTPS adoption.

[Figure: plot of the URL referrer loss S * (1 - S) as a function of HTTPS adoption S, with regions 1-3 marked]

Interpretation of the formula: if all websites are under HTTP, then all links carry the URL referrer, as HTTP -> HTTP is fine; if all websites are under HTTPS, then all links carry the URL referrer, as HTTPS -> HTTPS is fine. The only problem arises when some websites are under HTTP and some under HTTPS. At worst, we lose a quarter of the URL referrers, when half of the websites are under HTTP and half under HTTPS.
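A quick numeric check of the formula (a sketch; the adoption levels are just sample points):

    // Fraction of external-link referrers lost, assuming uniformly mixed linking,
    // where only HTTPS -> HTTP navigations drop the referrer.
    fun referrerLoss(httpsAdoption: Double): Double = httpsAdoption * (1.0 - httpsAdoption)

    fun main() {
        for (s in listOf(0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0)) {
            val lostPercent = referrerLoss(s) * 100
            println("HTTPS adoption ${(s * 100).toInt()}% -> ${"%.0f".format(lostPercent)}% of external referrers lost")
        }
        // Peaks at 25% lost at 50% adoption; back to 0% at full HTTPS adoption.
    }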

Here is an interactive view (an HTTPS adoption slider showing the percentage of URL referrers lost, broken down into HTTPS -> HTTP, HTTPS -> HTTPS, HTTP -> HTTP and HTTP -> HTTPS shares). Please keep in mind that most clicks are to pages inside the same website; these estimates affect external links only.

This May, Wired wrote:

Early last year–before the Snowden revelations–encrypted traffic accounted for 2.29 percent of all peak hour traffic in North America, according to Sandvine’s report. Now, it spans 3.8 percent. But that’s a small jump compared to other parts of the world. In Europe, encrypted traffic went from 1.47 percent to 6.10 percent, and in Latin America, it increased from 1.8 percent to 10.37 percent.

http://www.wired.com/2014/05/sandvine-report/

Which means that by now we probably have about 10% of websites under HTTPS, so we are somewhere in region 1 of the previous figure.

Given the pace of HTTPS adoption, and the fact that many websites have few resources or reasons to adopt HTTPS, the Analytics Engines might soon find themselves in the worst place possible, somewhere around 30-70% HTTPS adoption, which maximizes the missing URL referrers (region 2).

Since there is no way back to a 98% HTTP web, the only way forward to maximize Analytics profits is to move to region 3, full HTTPS adoption, OR to change the default browser behavior regarding URL referrers.

A loss of 20-25% of URL referrers is bad, but it is not going to break the bank. Analytics Engines will probably lose in the low double-digit millions of dollars of annual revenue (just a wild guess).

What this means for the future is that we should expect either of these two scenarios to happen in 2015:

  • Someone might be interested in subsidizing your free HTTPS certificate and offering you support to manage your secure server,
  • OR, as Chrome and Firefox are used by the majority of users and are backed by the same revenue streams, we might see a change in the default Web Browser behavior regarding URL referrers.

OR, who knows, both.