#AceNewsReport – May.26: The investigation has “outstanding cross-market significance” due to the breadth of Google’s digital products, Federal Cartel Office head Andreas Mundt said:
German antitrust watchdog launches probe into Google: ‘The Federal Cartel Office will investigate the European units of Google in Germany and Ireland and its parent company, Alphabet, in California, it said in a statement’
by French Press Agency – AFP
“Google’s business model is very fundamentally built on the processing of its users’ data,” Mundt said. “Google has a strategic advantage here due to its established access to competitively relevant data.”
A key question in the probe was whether consumers “have sufficient choice over the use of their data by Google if they want to use Google services,” he said.
The investigation follows the application of a new law giving the authorities more power to rein in big tech companies, with similar proceedings launched recently against Amazon and Facebook.
Under the amendment to Germany’s competition law passed in January, the watchdog said it now has more power to “intervene earlier and more effectively” against big tech companies, rather than simply punishing them for abuses of their dominant market position.
The Federal Cartel Office said last week it is examining whether Amazon has “an almost unchallengeable position of economic power,” having already launched two traditional abuse control proceedings.
The watchdog has also employed its new powers to widen the scope of an investigation into Facebook over its integration of virtual reality headsets.
The push to tighten legislation comes as big tech companies are facing increasing scrutiny around the globe, including in the U.S., where Google and Facebook are facing antitrust suits.
#AceNewsReport – Apr.23: It alleges Google “punishes” publishers in its rankings if they don’t sell enough advertising space in its marketplace.
Daily Mail owner sues Google over search results: ‘Associated Newspapers accuses Google of having too much control over online advertising and of downgrading links to its stories, favouring other outlets’
Associated Newspapers’ concerns stem from its assessment that its coverage of the Royal Family in 2021 has been downplayed in search results. For example, it claims that British users searching for broadcaster Piers Morgan’s comments on the Duchess of Sussex following an interview with Oprah Winfrey were more likely to see articles about Morgan produced by smaller, regional outlets. That is despite the Daily Mail writing multiple stories a day about his comments around that time and employing him as a columnist.
Google called the claims “meritless”.
Daily Mail editor emeritus Peter Wright told the BBC’s Today programme that the search engine’s alleged actions were “anti-competitive”.
He suggested that the Daily Mail’s search visibility dropped after using online advertising techniques “which were allowing us to divert advertising traffic away from Google to other ad exchanges, which paid better prices – and this was their punishment”.
“We think it’s time to call this company out,” he said.
The Daily Mail’s MailOnline site is one of the world’s most-read websites. It has 75 million unique monthly visitors in the US alone, according to the lawsuit, which was filed in New York on Tuesday.
A Google spokeswoman said: “The Daily Mail’s claims are completely inaccurate.
“The use of our ad tech tools has no bearing on how a publisher’s website ranks in Google search.
“More generally, we compete in a crowded and competitive ad tech space where publishers have and exercise multiple options. The Daily Mail itself authorises dozens of ad tech companies to sell and manage their ad space, including Amazon, Verizon and more. We will defend ourselves against these meritless claims.”
#AceNewsReport – Apr.01: The community guidelines, as YouTube says, are designed to ensure the video-sharing platform stays protected, and set out what is and is not allowed on YouTube. The guidelines apply to all types of content, including videos, comments, links, and thumbnails.
Google renews attack on YouTube account of Iran’s Press TV: ‘“We have reviewed your content and found severe or repeated violations of our Community Guidelines. Because of this, we have removed your channel from YouTube,” Google said in a message’
Tuesday, 30 March 2021 11:16 PM [ Last Update: Wednesday, 31 March 2021 8:27 AM ]
In recent years, the US tech giant has repeatedly taken such measures against Iranian media outlets. It has targeted Press TV more than any other Iranian outlet, given the breadth of its viewership and readership.
The measure comes hot on the heels of another hostile move against Iranian media outlets, with Facebook shutting down the page of the Press TV news network.
The US-based social media giant informed Press TV on Friday that its account had been shut down for what it claimed to be the Iranian news channel’s failure to “follow our Community Standards.” The page was reinstated a few days later.
How to see everything Google tracks about you and erase it: Here’s how to see the secrets Google knows about you. And embarrassing queries aside, there are certain things you should never search using Google for an entirely different reason: it can open you up to scams and malware. Tap or click for seven risky search terms to avoid.
First, make sure you’re signed in to your Google account. If you’re using Chrome and see your photo or initial in the top right corner, you’re good to go. Otherwise, go to myaccount.google.com and sign in.
Next, open a new browser tab and search for the term “Google ad settings.”
Click the first result that pops up. This brings you to your ad personalization page. It displays a long list of what Google “knows” about you and topics the company thinks you are most interested in.
You’ll likely see dozens of results. A quick search may show that you are obsessed with the Royal family or even something more obscure like school supplies.
Google’s assumptions aren’t always right. Take my results: Google thinks I don’t have children, am male, and like heavy metal music. But amid those three strike-outs, Google nailed it with tech, jets, and tea.
Stop ad personalization with a click
Now that the fun (or unsettling) part is over, time to get to work. These private details are compiled from all the searches you’ve done, links you’ve clicked, YouTube videos you’ve watched, articles you’ve read, and more.
Maybe you scanned through your list and were glad to see just how off Google was when it comes to your interests. Or maybe they were a little too on the money for comfort.
You can switch off the ad personalization settings at the top of your Google ad settings page with one easy click. Be sure to click Advanced to expand another box. Here you can allow or prevent Google from using data from “websites and apps that partner with Google” to further personalize what you see across the web.
You can also find out more about why specific details have ended up on your profile.
Click on an interest or demographic to get a pop-up that gives you a bit more information about why it’s part of your profile. Choose “turn off” to delete this demographic entirely, removing the tag from your profile.
Erasing your data
If you toggled ad personalization off, don’t expect to stop seeing ads. It also doesn’t mean you have wiped your data from Google’s databases entirely.
To do that, you need to dive deeper into your Google account settings. We’ve got a step-by-step guide showing you how to erase everything you can.
The first step, of course, is clearing your search history and activity:
Go to myaccount.google.com and log in. Click Manage your Google Account.
Click on Manage your data & personalization, located under Privacy & Personalization.
Under the Activity controls panel, you will see checkmarks next to Web & App Activity, Location History, and YouTube History. Click each one to adjust your settings. You can toggle them off to stop further tracking.
Below Activity controls, click on My Activity under Activity and timeline.
On the menu that appears in the left sidebar, click Delete activity by. Select how far back you would like to delete your history in the pop-up menu. Click Delete to confirm.
#AceSecurityReport – Mar.11: The attackers add an air of legitimacy to the campaign by leveraging a fake Google reCAPTCHA system and top-level domain landing pages that include the logos of victims’ companies:
Fake Google reCAPTCHA Phishing Attack Swipes Office 365 Passwords: According to researchers, at least 2,500 such emails have been unsuccessfully sent to senior-level employees in the banking and IT sector over the past three months. The emails first take recipients to a fake Google reCAPTCHA system page. Google reCAPTCHA is a service that helps protect websites from spam and abuse by using a Turing test to tell humans and bots apart (by asking a user to pick out fire hydrants from a series of images, for instance).
March 8, 2021 12:04 pm
A phishing attack targeting Microsoft users leverages a bogus Google reCAPTCHA system.
Microsoft users are being targeted with thousands of phishing emails, in an ongoing attack aiming to steal their Office 365 credentials. The attackers add an air of legitimacy to the campaign by leveraging a fake Google reCAPTCHA system and top-level domain landing pages that include the logos of victims’ companies.
“The attack is notable for its targeted aim at senior business leaders with titles such as Vice President and Managing Director who are likely to have a higher degree of access to sensitive company data,” said researchers with Zscaler’s ThreatLabZ security research team on Friday. “The aim of these campaigns is to steal these victims’ login credentials to allow threat actors access to valuable company assets.”
Phishing Emails: Fake Voicemail Attachments
The phishing emails pretend to be automated emails from victims’ unified communications tools, which say that they have a voicemail attachment. For instance, one email tells users that “(503) ***-6719 has left you a message 35 second(s) long on Jan 20” along with a lone attachment that’s titled “vmail-219.HTM.” Another tells email recipients to “REVIEW SECURE DOCUMENT.”
The phishing email sample. Credit: Zscaler
When victims click on the attachment, they encounter the fake Google reCAPTCHA screen, which contains a typical reCAPTCHA box featuring an “I’m not a robot” checkbox that the user must click, which then triggers the Turing test.
After filling out the fake reCAPTCHA system, victims are then directed to what appears to be a Microsoft login screen. The login pages also contain different logos from the companies which victims work at – such as one containing a logo from software company ScienceLogic and another from office rental company BizSpace. This reveals that attackers have done their homework and are customizing their phishing landing pages to fit their victims’ profile, in order to make the attack appear more legitimate.
Victims are asked to input their credentials into the system; once they do so, a message tells them that the validation was “successful” and that they are being redirected.
The phishing landing page mimics Microsoft’s login page. Credit: Zscaler
“After giving the login credentials, the phishing campaign will show a fake message that says ‘Validation successful,’” said researchers. “Users are then shown a recording of a voicemail message that they can play, allowing threat actors to avoid suspicion.”
Researchers found a variety of phishing pages associated with the campaign, which were hosted using generic top-level domains such as .xyz, .club and .online. These top-level domains are typically utilized by cybercriminals in spam and phishing attacks. That’s because they can be purchased for less than $1 each – a low price for adding a level of believability to phishing campaigns.
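As a defensive illustration, a mail filter could flag links whose hosts end in one of these bargain TLDs. The sketch below is an assumption for illustration only, not Zscaler’s actual detection logic, and the TLD list is just the three named above:

```python
from urllib.parse import urlparse

# TLDs the researchers saw abused because they cost under $1 each.
SUSPECT_TLDS = {"xyz", "club", "online"}

def is_suspect_link(url: str) -> bool:
    """Return True if the URL's host ends in a low-cost generic TLD."""
    host = urlparse(url).hostname or ""
    return host.rsplit(".", 1)[-1].lower() in SUSPECT_TLDS

print(is_suspect_link("https://login-portal.xyz/vmail"))   # True
print(is_suspect_link("https://www.microsoft.com/login"))  # False
```

A real filter would combine a signal like this with sender reputation and attachment analysis; the TLD alone is far too coarse to block on.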
More Phishing Attacks Using the Fake Google reCAPTCHA Tactic
Adversaries have been leveraging bogus reCAPTCHA systems in their attacks for years. For instance, in 2019, a malware campaign targeted a Polish bank and its users with emails containing a link to a malicious PHP file, which eventually downloaded the BankBot malware onto victims’ systems. The attackers used a fake Google reCAPTCHA system to seem more realistic.
Examples like these show that fake reCAPTCHA continues to be used in phishing attacks, as the tactic successfully adds legitimacy to the attack: “Similar phishing campaigns utilizing fake Google reCAPTCHAs have been observed for several years, but this specific campaign targeting executives across specific industry verticals started in December 2020,” noted researchers.
Microsoft Office 365 users have faced several sophisticated phishing attacks and scams over the past few months. In October, researchers warned of a phishing campaign that pretends to be an automated message from Microsoft Teams. In reality, the attack aimed to steal Office 365 recipients’ login credentials. Also in October, an Office 365 credential-phishing attack targeted the hospitality industry, using visual CAPTCHAs to avoid detection and appear legitimate. Phishing attackers have also adopted new tactics like Google Translate or custom fonts to make the scams seem more legitimate.
#AceNewsReport – Mar.09: The third-party cookie is dying, and Google is trying to create its replacement: No one should mourn the death of the cookie as we know it:
Google’s FLoC Is a Terrible Idea: ‘For more than two decades, the third-party cookie has been the lynchpin in a shadowy, seedy, multi-billion dollar advertising-surveillance industry on the Web; phasing out tracking cookies and other persistent third-party identifiers is long overdue. However, as the foundations shift beneath the advertising industry, its biggest players are determined to land on their feet & Google is leading the charge to replace third-party cookies with a new suite of technologies to target ads on the Web. And some of its proposals show that it hasn’t learned the right lessons from the ongoing backlash to the surveillance business model. This post will focus on one of those proposals, Federated Learning of Cohorts (FLoC), which is perhaps the most ambitious—and potentially the most harmful’
FLoC is meant to be a new way to make your browser do the profiling that third-party trackers used to do themselves: in this case, boiling down your recent browsing activity into a behavioral label, and then sharing it with websites and advertisers. The technology will avoid the privacy risks of third-party cookies, but it will create new ones in the process. It may also exacerbate many of the worst non-privacy problems with behavioral ads, including discrimination and predatory targeting:
Google’s pitch to privacy advocates is that a world with FLoC (and other elements of the “privacy sandbox”) will be better than the world we have today, where data brokers and ad-tech giants track and profile with impunity. But that framing is based on a false premise that we have to choose between “old tracking” and “new tracking.” It’s not either-or. Instead of re-inventing the tracking wheel, we should imagine a better world without the myriad problems of targeted ads.
We stand at a fork in the road. Behind us is the era of the third-party cookie, perhaps the Web’s biggest mistake. Ahead of us are two possible futures.
In one, users get to decide what information to share with each site they choose to interact with. No one needs to worry that their past browsing will be held against them—or leveraged to manipulate them—when they next open a tab.
In the other, each user’s behavior follows them from site to site as a label, inscrutable at a glance but rich with meaning to those in the know. Their recent history, distilled into a few bits, is “democratized” and shared with dozens of nameless actors that take part in the service of each web page. Users begin every interaction with a confession: here’s what I’ve been up to this week, please treat me accordingly.
Users and advocates must reject FLoC and other misguided attempts to reinvent behavioral targeting. We implore Google to abandon FLoC and redirect its effort towards building a truly user-friendly Web.
What is FLoC?
In 2019, Google presented the Privacy Sandbox, its vision for the future of privacy on the Web. At the center of the project is a suite of cookieless protocols designed to satisfy the myriad use cases that third-party cookies currently provide to advertisers. Google took its proposals to the W3C, the standards-making body for the Web, where they have primarily been discussed in the Web Advertising Business Group, a body made up primarily of ad-tech vendors. In the intervening months, Google and other advertisers have proposed dozens of bird-themed technical standards: PIGIN, TURTLEDOVE, SPARROW, SWAN, SPURFOWL, PELICAN, PARROT… the list goes on. Seriously. Each of the “bird” proposals is designed to perform one of the functions in the targeted advertising ecosystem that is currently done by cookies.
FLoC is designed to help advertisers perform behavioral targeting without third-party cookies. A browser with FLoC enabled would collect information about its user’s browsing habits, then use that information to assign its user to a “cohort” or group. Users with similar browsing habits—for some definition of “similar”—would be grouped into the same cohort. Each user’s browser will share a cohort ID, indicating which group they belong to, with websites and advertisers. According to the proposal, at least a few thousand users should belong to each cohort (though that’s not a guarantee).
If that sounds dense, think of it this way: your FLoC ID will be like a succinct summary of your recent activity on the Web.
Google’s proof of concept used the domains of the sites that each user visited as the basis for grouping people together. It then used an algorithm called SimHash to create the groups. SimHash can be computed locally on each user’s machine, so there’s no need for a central server to collect behavioral data. However, a central administrator could have a role in enforcing privacy guarantees. In order to prevent any cohort from being too small (i.e. too identifying), Google proposes that a central actor could count the number of users assigned to each cohort. If any are too small, they can be combined with other, similar cohorts until enough users are represented in each one.
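To make the local computation concrete, here is a toy SimHash over visited domains. Everything about it (the 16-bit width, SHA-256 as the per-domain hash, equal weighting) is an illustrative assumption; Google’s actual feature extraction and hash parameters are not spelled out at this level of detail:

```python
import hashlib

HASH_BITS = 16  # toy width, not the real proposal's (assumption)

def simhash(domains):
    """Compute a SimHash fingerprint over visited domains, locally."""
    weights = [0] * HASH_BITS
    for d in domains:
        # Hash each domain to a 16-bit value.
        h = int.from_bytes(hashlib.sha256(d.encode()).digest()[:2], "big")
        # Each bit of the hash votes +1 or -1 on that output bit.
        for bit in range(HASH_BITS):
            weights[bit] += 1 if (h >> bit) & 1 else -1
    # Positive vote totals become 1-bits in the fingerprint.
    return sum(1 << bit for bit in range(HASH_BITS) if weights[bit] > 0)

a = simhash(["news.example", "knitting.example", "weather.example"])
b = simhash(["news.example", "knitting.example", "sports.example"])
# Hamming distance between the two fingerprints; similar histories
# tend (but are not guaranteed) to land near each other in hash space.
print(bin(a ^ b).count("1"))
```

The key property is that no browsing data leaves the machine: only the resulting fingerprint (the cohort-style label) would be shared.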
One thing the proposal does specify is duration. FLoC cohorts will be re-calculated on a weekly basis, each time using data from the previous week’s browsing. This makes FLoC cohorts less useful as long-term identifiers, but it also makes them more potent measures of how users behave over time.
New privacy problems
FLoC is part of a suite intended to bring targeted ads into a privacy-preserving future. But the core design involves sharing new information with advertisers. Unsurprisingly, this also creates new privacy risks.
The first issue is fingerprinting. Browser fingerprinting is the practice of gathering many discrete pieces of information from a user’s browser to create a unique, stable identifier for that browser. EFF’s Cover Your Tracks project demonstrates how the process works: in a nutshell, the more ways your browser looks or acts different from others’, the easier it is to fingerprint.
Google has promised that the vast majority of FLoC cohorts will comprise thousands of users each, so a cohort ID alone shouldn’t distinguish you from a few thousand other people like you. However, that still gives fingerprinters a massive head start. If a tracker starts with your FLoC cohort, it only has to distinguish your browser from a few thousand others (rather than a few hundred million). In information theoretic terms, FLoC cohorts will contain several bits of entropy—up to 8 bits, in Google’s proof of concept trial. This information is even more potent given that it is unlikely to be correlated with other information that the browser exposes. This will make it much easier for trackers to put together a unique fingerprint for FLoC users.
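That head start is easy to quantify: each bit of cohort entropy halves the anonymity set a fingerprinter must distinguish within. A back-of-the-envelope sketch (the population figure is illustrative, and the 8 bits comes from the proof-of-concept figure above):

```python
import math

population = 300_000_000  # illustrative browser population (assumption)
cohort_bits = 8           # entropy cited for Google's proof of concept

# Without FLoC, a fingerprint must single you out of the whole population.
bits_needed_without = math.log2(population)

# With a cohort ID, the search space shrinks by a factor of 2**cohort_bits.
anonymity_set = population / 2**cohort_bits
bits_needed_with = math.log2(anonymity_set)

print(f"{bits_needed_without:.1f} bits -> {bits_needed_with:.1f} bits "
      f"(anonymity set of ~{anonymity_set:,.0f} users)")
```

In other words, the cohort ID hands a tracker 8 of the roughly 28 bits it would need, before it has collected a single conventional fingerprinting signal.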
Google has acknowledged this as a challenge, but has pledged to solve it as part of its broader “Privacy Budget” plan for dealing with fingerprinting long-term. Solving fingerprinting is an admirable goal, and its proposal is a promising avenue to pursue. But according to the FAQ, that plan is “an early stage proposal and does not yet have a browser implementation.” Meanwhile, Google is set to begin testing FLoC as early as this month.
Fingerprinting is notoriously difficult to stop. Browsers like Safari and Tor have engaged in years-long wars of attrition against trackers, sacrificing large swaths of their own feature sets in order to reduce fingerprinting attack surfaces. Fingerprinting mitigation generally involves trimming away or restricting unnecessary sources of entropy—which is what FLoC is. Google should not create new fingerprinting risks until it’s figured out how to deal with existing ones.
The second problem is less easily explained away: the technology will share new personal data with trackers who can already identify users. For FLoC to be useful to advertisers, a user’s cohort will necessarily reveal information about their behavior.
The project’s Github page addresses this up front:
This API democratizes access to some information about an individual’s general browsing history (and thus, general interests) to any site that opts into it. … Sites that know a person’s PII (e.g., when people sign in using their email address) could record and reveal their cohort. This means that information about an individual’s interests may eventually become public.
As described above, FLoC cohorts shouldn’t work as identifiers by themselves. However, any company able to identify a user in other ways—say, by offering “log in with Google” services to sites around the Internet—will be able to tie the information it learns from FLoC to the user’s profile.
Two categories of information may be exposed in this way:
Specific information about browsing history. Trackers may be able to reverse-engineer the cohort-assignment algorithm to determine that any user who belongs to a specific cohort probably or definitely visited specific sites.
General information about demographics or interests. Observers may learn that in general, members of a specific cohort are substantially likely to be a specific type of person. For example, a particular cohort may over-represent users who are young, female, and Black; another cohort, middle-aged Republican voters; a third, LGBTQ+ youth.
This means every site you visit will have a good idea about what kind of person you are on first contact, without having to do the work of tracking you across the web. Moreover, as your FLoC cohort will update over time, sites that can identify you in other ways will also be able to track how your browsing changes. Remember, a FLoC cohort is nothing more, and nothing less, than a summary of your recent browsing activity.
You should have a right to present different aspects of your identity in different contexts. If you visit a site for medical information, you might trust it with information about your health, but there’s no reason it needs to know what your politics are. Likewise, if you visit a retail website, it shouldn’t need to know whether you’ve recently read up on treatment for depression. FLoC erodes this separation of contexts, and instead presents the same behavioral summary to everyone you interact with.
FLoC is designed to prevent a very specific threat: the kind of individualized profiling that is enabled by cross-context identifiers today. The goal of FLoC and other proposals is to avoid letting trackers access specific pieces of information that they can tie to specific people. As we’ve shown, FLoC may actually help trackers in many contexts. But even if Google is able to iterate on its design and prevent these risks, the harms of targeted advertising are not limited to violations of privacy. FLoC’s core objective is at odds with other civil liberties.
The power to target is the power to discriminate. By definition, targeted ads allow advertisers to reach some kinds of people while excluding others. A targeting system may be used to decide who gets to see job postings or loan offers just as easily as it is to advertise shoes.
Over the years, the machinery of targeted advertising has frequently been used for exploitation, discrimination, and harm. The ability to target people based on ethnicity, religion, gender, age, or ability allows discriminatory ads for jobs, housing, and credit. Targeting based on credit history—or characteristics systematically associated with it— enables predatory ads for high-interest loans. Targeting based on demographics, location, and political affiliation helps purveyors of politically motivated disinformation and voter suppression. All kinds of behavioral targeting increase the risk of convincing scams.
Google, Facebook, and many other ad platforms already try to rein in certain uses of their targeting platforms. Google, for example, limits advertisers’ ability to target people in “sensitive interest categories.” However, these efforts frequently fall short; determined actors can usually find workarounds to platform-wide restrictions on certain kinds of targeting or certain kinds of ads.
Even with absolute power over what information can be used to target whom, platforms are too often unable to prevent abuse of their technology. But FLoC will use an unsupervised algorithm to create its clusters. That means that nobody will have direct control over how people are grouped together. Ideally (for advertisers), FLoC will create groups that have meaningful behaviors and interests in common. But online behavior is linked to all kinds of sensitive characteristics—demographics like gender, ethnicity, age, and income; “big 5” personality traits; even mental health. It is highly likely that FLoC will group users along some of these axes as well. FLoC groupings may also directly reflect visits to websites related to substance abuse, financial hardship, or support for survivors of trauma.
Google has proposed that it can monitor the outputs of the system to check for any correlations with its sensitive categories. If it finds that a particular cohort is too closely related to a particular protected group, the administrative server can choose new parameters for the algorithm and tell users’ browsers to group themselves again.
This solution sounds both Orwellian and Sisyphean. In order to monitor how FLoC groups correlate with sensitive categories, Google will need to run massive audits using data about users’ race, gender, religion, age, health, and financial status. Whenever it finds a cohort that correlates too strongly along any of those axes, it will have to reconfigure the whole algorithm and try again, hoping that no other “sensitive categories” are implicated in the new version. This is a much more difficult version of the problem it is already trying, and frequently failing, to solve.
In a world with FLoC, it may be more difficult to target users directly based on age, gender, or income. But it won’t be impossible. Trackers with access to auxiliary information about users will be able to learn what FLoC groupings “mean”—what kinds of people they contain—through observation and experiment. Those who are determined to do so will still be able to discriminate. Moreover, this kind of behavior will be harder for platforms to police than it already is. Advertisers with bad intentions will have plausible deniability—after all, they aren’t directly targeting protected categories, they’re just reaching people based on behavior. And the whole system will be more opaque to users and regulators.
Google, please don’t do this
We wrote about FLoC and the other initial batch of proposals when they were first introduced, calling FLoC “the opposite of privacy-preserving technology.” We hoped that the standards process would shed light on FLoC’s fundamental flaws, causing Google to reconsider pushing it forward. Indeed, several issues on the official Github page raise the exact same concerns that we highlight here. However, Google has continued developing the system, leaving the fundamentals nearly unchanged. It has started pitching FLoC to advertisers, boasting that FLoC is a “95% effective” replacement for cookie-based targeting. And starting with Chrome 89, released on March 2, it’s deploying the technology for a trial run. A small portion of Chrome users—still likely millions of people—will be (or have been) assigned to test the new technology.
Make no mistake, if Google follows through on its plan to implement FLoC in Chrome, it will likely give everyone involved “options.” The system will probably be opt-in for the advertisers that will benefit from it, and opt-out for the users who stand to be hurt. Google will surely tout this as a step forward for “transparency and user control,” knowing full well that the vast majority of its users will not understand how FLoC works, and that very few will go out of their way to turn it off. It will pat itself on the back for ushering in a new, private era on the Web, free of the evil third-party cookie—the technology that Google helped extend well past its shelf life, making billions of dollars in the process.
It doesn’t have to be that way. The most important parts of the privacy sandbox, like dropping third-party identifiers and fighting fingerprinting, will genuinely change the Web for the better. Google can choose to dismantle the old scaffolding for surveillance without replacing it with something new and uniquely harmful.
We emphatically reject the future of FLoC. That is not the world we want, nor the one users deserve. Google needs to learn the correct lessons from the era of third-party tracking and design its browser to work for users, not for advertisers.
Note: We reached out to Google to verify certain facts presented in this post, as well as to request more information about the upcoming Origin Trial. We have not received a response at the time of posting.
#AceWorldNews – JAPAN – October 10 – A Japanese court has ruled Google had to delete about half of 237 search results appearing after a plaintiff’s name was entered, the Asahi Shimbun newspaper reported as cited by AFP.
#AceWorldNews – BRUSSELS – September 22 – Google could face a record fine for breaching EU competition rules, the European Commission’s competition chief has said, warning that its four-year investigation into the US search engine could eventually rival the sixteen years spent investigating software rival Microsoft.
Presenting the Commission’s annual competition report in the European Parliament on Tuesday (23 September), Joaquin Almunia said that he had asked Google “to improve its proposals” or face a formal ‘Statement of Objections’, including a possible fine, if its latest offer did not go “in the right direction”.
Google faces a total of twenty complaints from its rivals, including Microsoft.
“Some of the twenty formal complainants have given us fresh evidence and solid arguments against several aspects of the latest proposals put forward by Google,” Almunia told MEPs.
“We now need to see if Google can address these issues and allay our concerns,” said Almunia, although he noted that “Microsoft was investigated for 16 years, which is four times as much as the Google investigation has taken, and there are more problems with Google than there were with Microsoft.”
Google logo, officially released in May 2010 (Photo credit: Wikipedia)
The European Union Court of Justice said that ordinary people can ask Google to remove some sensitive, irrelevant or outdated information from Internet search results.
Earlier, the search engine stated that it does not control search results and bears no responsibility for personal data that is “in open access”. The responsibility lies with the owner of the website that provides the information, and Google merely presents the user with a link.
The case was brought by a Spanish man who complained that an auction notice of his home that could be found on Google infringed upon his privacy.
Around 180 similar complaints have been filed in Spain.
#AceNewsServices – ANKARA – March 31 – Google says Turkey has been intercepting its Internet domain, redirecting users to other sites in the latest battle between Ankara and Web giants.
In a weekend post on Google’s security blog, software engineer Steven Carstensen said the company has received “several credible reports and confirmed with our own research that Google’s Domain Name System (DNS) service has been intercepted by most Turkish ISPs (Internet Service Providers).”
Carstensen said the DNS server “tells your computer the address of a server it’s looking for, in the same way that you might look up a phone number in a phone book.”
“Imagine if someone had changed out your phone book with another one, which looks pretty much the same as before, except that the listings for a few people showed the wrong phone number,” he added.
“That’s essentially what’s happened: Turkish ISPs have set up servers that masquerade as Google’s DNS service.”
Ace Related News: March 30 – Sophos – That’s because the easiest way for a country to get its ISPs to block access to a site like twitter.com is to tell them to stop resolving the name of the site in their DNS servers.
Most home users rely on their ISP for access to DNS, the Domain Name System that turns human-style internet names to computer-style numbers.
For example, when you ask your ISP how to find twitter.com, this is what happens. http://wp.me/p120rT-13WV
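The phone-book analogy above can be sketched in a few lines. This is a deliberately simplified model, a plain dictionary standing in for a resolver, not real DNS code, and every name and address below is made up for illustration:

```python
# A toy model of DNS resolution: a resolver is essentially a lookup
# table mapping human-readable names to numeric addresses.
# All addresses here are illustrative, not real DNS records.

honest_resolver = {
    "twitter.com": "199.16.156.102",
    "google.com": "172.217.16.78",
}

# An intercepting resolver looks almost identical, except that a few
# entries point somewhere else -- the "wrong phone number" in the analogy.
intercepting_resolver = dict(honest_resolver)
intercepting_resolver["twitter.com"] = "10.0.0.1"  # redirected entry

def resolve(resolver, name):
    """Return the address a given resolver reports for a name."""
    return resolver.get(name)

print(resolve(honest_resolver, "twitter.com"))        # the genuine address
print(resolve(intercepting_resolver, "twitter.com"))  # the redirected one
```

Real resolution goes through the ISP’s DNS servers (in Python, via `socket.getaddrinfo`); the point of the sketch is only that, from the lookup interface alone, a user cannot tell which “phone book” answered.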
#AceSecurityNews – BRAZIL – March 26 – Brazil has scored big for net neutrality after its lower house of Congress approved a ground-breaking post-Snowden bill that protects its users’ privacy rights, albeit with some sacrifices.
The measure did not go as smoothly as it could have. To ensure success, President Dilma Rousseff had to let it through at the cost of allowing companies such as Google and Facebook to store user information outside Brazil’s servers.
The truth is that, as before, all your Gmail and private data is still stored on servers outside these countries and, as in the case of Google and Facebook, remains free to be used, albeit with what the companies will call restrictions, says Ace News Services.
However, other provisions, which ensured that internet providers gave equal privileges to all web traffic, were left in place. This went ahead despite contrary pleas by big local phone carriers who wanted to continue charging users higher prices for separate content, such as video streaming or Skype-like services. (RT)
In return for allowing Google and Facebook the freedom not to be bound by Brazilian servers where local user information is concerned, the bill strengthens legal oversight and punishment for companies that do not respect local laws when storing Brazilian user data internationally.
If any transgressions are detected, or data is not made available to law enforcement on request, a company would have to pay a fine equal to 10 percent of its annual earnings from the year before.
#AceSocialNews – FRANCE – March 25 – France’s top consumer rights group has filed a lawsuit in a Paris court against Google+, Facebook and Twitter, accusing the social networks of violating the country’s privacy laws.
UFC-Que Choisir – a group which advises consumers on products, services and their rights – said it was filing a suit in the High Court over “abusive” and “illegal” practices in the conditions of use on the three social networks, AFP reported Tuesday.
However, “they are stubbornly maintaining clauses that the association considers abusive or illegal,” the group was reported as saying.
According to the organization, the instructions were “inaccessible, unreadable and full of hypertext links” – some of which are available only in English.
The watchdog claimed that the social networks “persist in authorizing the widespread collection, modification, preservation and use of the data of users and even of those around them.”
#AceWorldNews – Google has declined Turkey’s requests to remove YouTube videos that allege government corruption, sources told The Wall Street Journal.
Amid a corruption scandal, Turkish Prime Minister Recep Tayyip Erdogan’s government has asked Google in recent weeks to block certain videos from its site in Turkey.
However, Google has reportedly refused to comply given it believes the requests are legally invalid. Turkey blocked the social media site Twitter on Thursday, and the government has threatened to do the same with YouTube and Facebook, as the sites have been prime conduits for corruption allegations.
Related News: Extract – March – 09.02 GMT – #AceWorldNews – Turkey has blocked Twitter hours after embattled Turkish Prime Minister Erdogan threatened to close it down ahead of a key election. It comes after audio recordings purportedly demonstrating corruption among his associates were posted on the site. http://wp.me/p165ui-4rt
“Your email is important to you, and making sure it stays safe and always available is important to us,” Gmail engineering security chief Nicolas Lidzborski said in a blog post.
“Starting today, Gmail will always use an encrypted HTTPS connection when you check or send email.
“Today’s change means that no one can listen in on your messages as they go back and forth between you and Gmail’s servers — no matter if you’re using public WiFi or logging in from your computer, phone or tablet.”
The internet giant’s announcement is the latest attempt to bolster the company’s widely used email service and follows a similar step in 2010, when the company made HTTPS the default connection option.
At the time, however, users had the option to turn this protection feature off.
Starting from Friday, Gmail is HTTPS-only.
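On the client side, “always use an encrypted HTTPS connection” means every request is wrapped in TLS with certificate verification switched on. A minimal sketch using Python’s standard `ssl` module shows the defaults such a connection relies on (this is a generic illustration, not Gmail’s server-side configuration; the commented hostname is illustrative):

```python
import ssl

# A default client context enables certificate verification and
# hostname checking -- the properties that stop a third party from
# listening in on traffic between you and the server.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # server must present a valid certificate
print(ctx.check_hostname)                    # certificate must match the hostname

# To actually connect, the context would wrap an ordinary socket, e.g.:
#   with socket.create_connection(("mail.google.com", 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname="mail.google.com") as tls:
#           ...  # all traffic on `tls` is now encrypted end to end
```

Making HTTPS mandatory server-side simply removes the unencrypted alternative, so even a client that never asks for TLS gets it.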
The move is a response to a disclosure made by National Security Agency (NSA) whistleblower, Edward Snowden, that the agency had been secretly tapping into the main communications links that connect Yahoo and Google data centres around the world.
#AceSecurityNews – The Safe Internet League, Russia’s largest and most reputable organization fighting dangerous web content, considers it necessary to bring in highly skilled psychologists and psychiatrists as the popular Google-owned video hosting and sharing service YouTube develops a special version for kids, Safe Internet League CEO Denis Davydov was quoted as saying by the organization’s press service on Thursday.
“Leading psychologists and psychiatrists should participate in developing requirements for video content hosted on the so-called child-friendly version of YouTube in order to eliminate the risk of ‘a wolf in sheep’s clothing’,” Davydov said. “Far from all videos that may seem harmless to us are necessarily suitable for children. And specialists’ opinion is essential in this regard.”
Davydov said the Safe Internet League hailed Google’s decision to create a version of its video site aimed specifically at children aged ten and under.
As envisioned by project developers, the site would only show videos deemed safe for this age group, and parents will control access to it. The site would also filter out comments that contain explicit language, or other references to adult content.
“It is very laudable that Google has started demonstrating its willingness to work in Russia, showing respect for the rights of our citizens and taking care of the younger generation of Russians,” Davydov said.
Last September, the Safe Internet League published the results of a full-scale investigation by the League into Google’s activities in Russia. The organization accused Google of “ignoring Russian legal requirements” and “deliberately trying to influence Russian domestic policy in order to promote its services among Russian citizens and officials, in order to undermine digital sovereignty”.
According to reports, YouTube has already approached video producers asking to create suitable content and videos, and it is thought this content would be available exclusively on the site.
The Safe Internet League is a non-commercial organization launched by several major internet providers and a Christian charity.
The declared aim of the group is ridding the Internet of dangerous content through self-regulation in order to prevent government censorship.
Security researcher Alex Holden said that 360 million personal account records were obtained in separate attacks, but a single attack appears to have yielded some 105 million records, which could make it the biggest single data breach to date, Reuters reports. “The sheer volume is overwhelming,” said Holden in a statement on Tuesday.
“These mind boggling figures are not meant to scare you and they are a product of multiple breaches which we are independently investigating. This is a call to action,” he added.
Hold Security said that as well as 360 million credentials, hackers were also selling 1.25 billion email addresses, which may be of interest to spammers.
The huge treasure trove of personal details includes user names, which are most often email addresses, and passwords, which in most cases are unencrypted.
Hold Security uncovered a similar breach in October last year, but the tens of millions of records had encrypted passwords, which made them much more difficult for hackers to use. “In October 2013, Hold Security identified the biggest ever public disclosure of 153 million stolen credentials from Adobe Systems Inc. One month later we identified another large breach of 42 million credentials from Cupid Media,” Hold Security said in statement.
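The distinction the report draws, plaintext credentials versus protected ones, can be illustrated with Python’s standard library. Salted hashing is one common way credentials are protected at rest; this is a generic sketch, not the scheme any of the breached companies actually used, and the passwords are placeholders:

```python
import hashlib
import os

def hash_password(password: str, salt: bytes) -> bytes:
    """Derive a salted hash; the stored value reveals nothing directly."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)
stored = hash_password("hunter2", salt)

# Verification repeats the derivation and compares the results:
assert hash_password("hunter2", salt) == stored   # correct password matches
assert hash_password("letmein", salt) != stored   # wrong password does not

# A different salt yields a different hash for the same password, so
# identical passwords don't produce identical database entries.
assert hash_password("hunter2", os.urandom(16)) != stored
```

A database holding only `salt` and `stored` forces an attacker to guess passwords one derivation at a time, which is why protected records are far harder to exploit than the unencrypted ones described above.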
Holden said he believes that in many cases the latest theft has yet to be publicly reported and that the companies attacked are unaware of it. He added that he will notify the companies concerned as soon as his staff has identified them. “We have staff working around the clock to identify the victims,” he said.
Heather Bearfield, who runs cybersecurity for the accounting firm Marcum LLP, told Reuters that while she had no information about Hold Security’s findings, she believed they were quite plausible, as hackers can do more with stolen credentials than with stolen credit cards: people often use the same login and password for many different accounts.
“They can get access to your actual bank account. That is huge. That is not necessarily recoverable funds,” she said.
The latest revelation by Hold Security comes just months after the US retailer Target announced that 110 million of its customers had their data stolen by hackers. Target and the credit and debit card companies concerned said that consumers do not bear much risk, as fraud losses are rapidly refunded.
Was the National Security Agency exploiting two just-discovered security flaws to hack into the iPhones and Apple computers of certain targets? Some skeptics are saying there is cause for concern about recent coincidences regarding the #NSA and Apple.
Within hours of one another over the weekend, Apple acknowledged that it had discovered critical vulnerabilities in both its iOS and OS X operating systems that, if exploited correctly, would put thought-to-be-secure communications into the hands of skilled hackers.
“An attacker with a privileged network position may capture or modify data in sessions protected by SSL/TLS,” the company announced.
Apple has since taken steps to supposedly patch up the flaw that affected mobile devices running its iOS operating system, such as iPhones, but has yet to unveil any fix for the OS X used by desktop and laptop computers.
As experts investigated the issue through the weekend, though, many couldn’t help but consider the likelihood, however slim, that the United States’ secretive spy agency had exploited those security flaws to conduct surveillance on targets.
According to an NSA slideshow leaked by Mr. Snowden last June, the US government has since 2007 relied on a program named PRISM that enables the agency to collect data “directly from the servers” of Microsoft, Yahoo, Google, Facebook and others. The most recent addition to that list, however, was Apple, which the NSA said it was only able to exploit using PRISM since October 2012.
“Tracking Your Every Move”
The affected operating system — iOS 6.0 — was released days earlier on September 24, 2012.
These facts, Daring Fireball blogger John Gruber wrote, “prove nothing” and are “purely circumstantial.” Nevertheless, he wrote, “the shoe fits.”
With the iOS vulnerability being blamed on a single line of erroneous code, Gruber considered a number of possibilities to explain how that happened.
“Conspiratorially, one could suppose the #NSA planted the bug, through an employee mole, perhaps. Innocuously, the Occam’s Razor explanation would be that this was an inadvertent error on the part of an Apple engineer,” he wrote.
“Once the bug was in place, the #NSA wouldn’t even have needed to find it by manually reading the source code. All they would need are automated tests using spoofed certificates that they run against each new release of every OS. Apple releases iOS, the #NSA’s automated spoofed certificate testing finds the vulnerability, and boom, Apple gets ‘added’ to PRISM.”
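The class of bug described, a single misplaced line that unconditionally skips the rest of a verification routine, can be illustrated in miniature. This is a hypothetical Python sketch of the logic only; the real flaw was a duplicated `goto fail;` statement in Apple’s C code, and the certificate fields here are invented for the example:

```python
def verify_certificate_buggy(cert):
    """Verification with a misplaced early exit: one stray line makes
    every later check unreachable, so bad signatures are accepted."""
    if not cert.get("issuer_ok"):
        return False
    return True  # <- erroneous line: exits before the signature check
    if not cert.get("signature_ok"):  # never reached
        return False
    return True

def verify_certificate_fixed(cert):
    """The intended logic: every check runs before success is reported."""
    if not cert.get("issuer_ok"):
        return False
    if not cert.get("signature_ok"):
        return False
    return True

# Gruber's hypothetical automated test is exactly this: feed a spoofed
# certificate (valid issuer, bad signature) to each release and see
# whether it is accepted.
spoofed = {"issuer_ok": True, "signature_ok": False}
print(verify_certificate_buggy(spoofed))  # accepted despite a bad signature
print(verify_certificate_fixed(spoofed))  # correctly rejected
```

The sketch also shows why such a flaw is invisible in normal use: legitimate certificates still verify, and only a deliberately spoofed one exposes the skipped check.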
According to a recent post on RT, in 15 years’ time computers will surpass their creators in intelligence, with an ability to tell stories and crack jokes, predicts a leading expert in artificial intelligence. Thus, Google will “know the answer to your question before you ask it.”
The facts are plain: the more we want this so-called ‘social freedom’, the greater the chance it is just around the corner, and we have not yet realised who pulls the strings.
Most people would probably agree that computers are man-made technologies that function inside the strict boundaries of man-made borders. For technologists like Google engineering director Ray Kurzweil, however, the moment when computers liberate themselves from their masters will occur in our lifetime.
We are being told daily that computers will make our lives better. Even by 2016, computers and robots will not only have surpassed their makers in terms of raw intelligence; they will understand us better than we understand ourselves, the futurist predicts with enthusiasm.
Kurzweil, 66, is the closest thing to a pop star in the world of artificial intelligence, the place where self-proclaimed geeks quietly lay the groundwork for what could truly be described as a new world order.
My research shows me that every time a new invention is announced, the people who would one day like everyone to obey are waiting with their ever-open cheque-books to buy up these next steps. Given the right environment they will create, this will one day lead to the ‘geeks of yesteryear’ becoming the controllers of our salvation.