#AceNewsReport – July.05: China’s internet regulator ordered app stores to stop offering Didi’s app on Sunday: It says the firm illegally collected users’ personal data.
CHINA: Didi says the removal of its app will affect its business. The company, whose app was removed for illegally collecting personal data, said in a statement that it “will strive to rectify any problems, improve its risk prevention awareness and technological capabilities, protect users’ privacy and data security, and continue to provide secure and convenient services to its users.”
It comes just days after the tech giant began selling shares on the New York Stock Exchange. The removal does not affect existing users, but it will prevent new users from registering on the country’s biggest ride-hailing platform.
That came after the Cyberspace Administration of China (CAC) said: “After checks and verification, the Didi Chuxing app was found to be in serious violation of regulations in its collection and use of personal information.”
Two days earlier, the CAC announced it was investigating the firm to protect “national security and the public interest”, prompting Didi’s shares to drop by 5.3%.
Didi gathers vast amounts of real-time data every day. It uses some of the data for autonomous driving technologies and traffic analysis.
Last week, China’s answer to Uber made its debut on the New York Stock Exchange and at the end of Friday’s trading had a market valuation of almost $74.5bn (£53.9bn).
The company raised $4.4bn in the Initial Public Offering (IPO), in what was the biggest listing in the US by a Chinese company since Alibaba’s debut in 2014.
Karishma Vaswani, Asia Presenter: Didi’s troubles are just the start. When I spoke to Didi Chuxing’s founder Cheng Wei in 2018, the one thing that was apparent was that this was a man on a mission. He wanted to take the Chinese firm global, and to offer a new vision of what a company driven by data could make possible. “We were born in China,” he told me at his offices in Beijing during an interview for the BBC series Asia’s Tech Titans. “But we hope to be a global company. We hope to be able to solve traffic and transportation problems for the world.”
The huge ambition Cheng Wei displayed to me on the rooftop of his sprawling Beijing campus manifested itself in Didi’s much-anticipated US IPO last week. But the environment in China today is very different from when I spoke to the Didi founder just a few years ago. There’s tighter scrutiny now, both inside and outside of China, on Chinese tech firms. And Didi’s troubles come against the backdrop of a broader crackdown on Chinese tech by regulators in the country, a crackdown that, some analysts have said, could be politically motivated as Beijing attempts to impose more control on the dynamic sector.
What this means for both Didi and other Chinese tech firms is that this is likely to be just the start of their troubles. For those looking to list in the US, there will be more questions from investors on the regulatory outlook, which could mean a difficult and uncertain time going forward.

Didi Chuxing, a platform similar to Uber or Lyft, arranges more than 20 million rides in China every day, on average.
Founded in 2012, it is particularly popular in China’s crowded cities. But it has expanded beyond China into 15 other markets. In June, the company reported revenue of about 42.2bn yuan ($6.52bn) for the three months to the end of March, with the vast majority of that coming from its China mobility business. China has recently moved to tighten up regulation of the country’s large tech firms.
The investigation follows regulatory crackdowns on other tech firms, from Alibaba to the food delivery service Meituan. On Monday, the CAC also said that it plans to investigate the Chinese truck-hailing firm Full Truck Alliance (FTA). Like Didi, FTA recently made its New York Stock Exchange debut, raising $1.6bn. It had a market valuation of more than $20bn at the end of trading on Friday.
#AceNewsReport – June.24: Editor says decisions are being made by leaders daily, and I follow many. I agree that some amount to control of the public’s freedom of rights, but, as with everything, laws exist to control and must not become all-controlling. Protest should not become a supposed right to violence when people do not get their own way; peaceful agreement works, and any impasse pursued for self fails at every turn. NEWS & VIEWS welcome in comments. Thanks for reading and thanks for your understanding.
#AceDailyNews – EFF.Org says: Now is the Time To Tell Congress to Ban Federal Use of Face Recognition. Many U.S. cities have already done so, including San Francisco and Boston. Now is our chance to end the federal government’s use of this spying technology, though in the end there may have to be a trade-off in protecting citizens’ security.
Tell your Senators and Representatives they must pass the Facial Recognition and Biometric Technology Moratorium Act, HR 3907/S.2052. It was recently introduced by Senators Edward J. Markey (D-Mass.), Jeff Merkley (D-Ore.), Bernie Sanders (I-Vt.), Elizabeth Warren (D-Mass.), and Ron Wyden (D-Ore.), and by Representatives Pramila Jayapal (WA-07), Ayanna Pressley (MA-07) and Rashida Tlaib (MI-13).
This important bill would be a critical step to ensuring that mass surveillance systems don’t use your face to track, identify, or harm you. The bill would ban the use of face surveillance by the federal government, as well as withhold certain federal funds for local and state governments that use the technology:
#AceNewsReport – June.23: Sometimes, it can be a bit confusing figuring out whether an item is ok to bring on a plane or whether it needs to be put in a checked bag. According to the TSA, some travelers in Boston attempted to bring a variety of sharp weapons on a plane, including at least one throwing star.
The TSA New England Twitter account shared images of the confiscated items, which also include a variety of knives. The tweet doesn’t mention whether these items all came from one group of travelers.
The tweet states, “Let’s get straight to the point…these items are not allowed in your carry-on bag. Some passengers found that out this weekend when TSA officers at Boston Logan Airport detected these sharp objects. Sheathe these items and put them in your checked bag please!”
These are not the only unusual weapons recently discovered by TSA officers.
Authorities arrested a man at Newark Liberty International Airport when he was caught attempting to sneak a gun past security, NJ.com reports. According to the news outlet, the small weapon was packed in a case within the man’s suitcase and was reportedly designed to look like a belt buckle.
Unfortunately, it was discovered that the gun was functional. The man reportedly claimed that he had forgotten that he had packed the item in his luggage.
In a statement obtained by NJ.com, TSA Federal Security Director for New Jersey Thomas Carter said, “Claiming to forget that you have a gun with you is inexcusable. If you own a gun you need to know where it is at all times. Each of these individuals now faces a stiff federal financial penalty that could cost them thousands of dollars.”
#AceNewsReport – June.15: Gardaí will also be required to make a written record of a stop and search: This will enable data to be collected so the effectiveness and use of the powers can be assessed.
BELFAST: Irish police to be given powers over passwords: The change is part of the Garda Síochána Bill published by Irish Justice Minister Heather Humphreys on Monday.
Special measures will be introduced for suspects who are children and suspects who may have impaired capacity.
The bill will bring in longer detention periods where multiple offences are being investigated together, up to a maximum of 48 hours.
It will also allow for a week’s detention of suspects in human trafficking offences, which are currently subject to a maximum of 24 hours’ detention.
‘Powers and safeguards’
“The law in this area is currently very complex, spread across the common law, hundreds of pieces of legislation, constitutional and EU law,” the minister said.
“Bringing it together will make the use of police powers by gardaí clear, transparent and accessible.
“The aim is to create a system that is both clear and straightforward for gardaí to use and easy for people to understand what powers gardaí can use and what their rights are in those circumstances.
“At the same time, where we are proposing to extend additional powers to gardaí, we are also strengthening safeguards. The bill will have a strong focus on the fundamental rights and procedural rights of the accused.
“I believe this will maintain the crucial balance which is key to our criminal justice system, while ensuring greater clarity and streamlining of Garda powers.”
#AceNewsReport – June.12: Hong Kong’s thriving film industry had previously enjoyed freedoms not seen on the mainland:
HONG KONG: Chinese officials to censor films that ‘endanger national security’. The order also instructs censors to prevent and suppress acts that do not uphold the sovereignty and territorial integrity of China.
The Film Censorship Authority should stay “vigilant to the portrayal, depiction or treatment of any act or activity which may amount to an offence endangering national security”, the government said in a statement, as protest singers fear for their future.
“Any content of a film which is objectively and reasonably capable of being perceived as endorsing, supporting, promoting such act or activity” will be censored, according to the guidelines.
It also cites “the common responsibility of the people of Hong Kong to safeguard the sovereignty, unification and territorial integrity of the People’s Republic of China.”
Films go through strict censorship on the Chinese mainland, and only a select few Western films or documentaries are released commercially each year. Historically, Hong Kong has taken a far more liberal approach.
The order has been met with anger and sadness by many on social media who say it will curtail artistic expression.
What’s the background? The former British colony was handed back to China in 1997 under a model called “one country, two systems”. Under the deal, which gave the territory freedoms not available in mainland China, Hong Kong also had its own mini-constitution and an elected parliament.
These freedoms are enshrined in Hong Kong’s mini-constitution, the Basic Law, which was meant to last until 2047. But fears that this model was being eroded led to huge pro-democracy protests in 2019.
Some protests turned violent, and in 2020 China introduced the national security law in the territory. Beijing said the law would target “sedition” and bring stability. Since the law was enacted in June last year, around 100 people have been arrested.
#AceNewsReport – Mar.09: The third-party cookie is dying, and Google is trying to create its replacement: No one should mourn the death of the cookie as we know it:
Google’s FLoC Is a Terrible Idea: ‘For more than two decades, the third-party cookie has been the lynchpin in a shadowy, seedy, multi-billion dollar advertising-surveillance industry on the Web; phasing out tracking cookies and other persistent third-party identifiers is long overdue. However, as the foundations shift beneath the advertising industry, its biggest players are determined to land on their feet, and Google is leading the charge to replace third-party cookies with a new suite of technologies to target ads on the Web. And some of its proposals show that it hasn’t learned the right lessons from the ongoing backlash to the surveillance business model. This post will focus on one of those proposals, Federated Learning of Cohorts (FLoC), which is perhaps the most ambitious, and potentially the most harmful.’
FLoC is meant to be a new way to make your browser do the profiling that third-party trackers used to do themselves: in this case, boiling down your recent browsing activity into a behavioral label, and then sharing it with websites and advertisers. The technology will avoid the privacy risks of third-party cookies, but it will create new ones in the process. It may also exacerbate many of the worst non-privacy problems with behavioral ads, including discrimination and predatory targeting:
Google’s pitch to privacy advocates is that a world with FLoC (and other elements of the “privacy sandbox”) will be better than the world we have today, where data brokers and ad-tech giants track and profile with impunity. But that framing is based on a false premise that we have to choose between “old tracking” and “new tracking.” It’s not either-or. Instead of re-inventing the tracking wheel, we should imagine a better world without the myriad problems of targeted ads.
We stand at a fork in the road. Behind us is the era of the third-party cookie, perhaps the Web’s biggest mistake. Ahead of us are two possible futures.
In one, users get to decide what information to share with each site they choose to interact with. No one needs to worry that their past browsing will be held against them—or leveraged to manipulate them—when they next open a tab.
In the other, each user’s behavior follows them from site to site as a label, inscrutable at a glance but rich with meaning to those in the know. Their recent history, distilled into a few bits, is “democratized” and shared with dozens of nameless actors that take part in serving each web page. Users begin every interaction with a confession: here’s what I’ve been up to this week, please treat me accordingly.
Users and advocates must reject FLoC and other misguided attempts to reinvent behavioral targeting. We implore Google to abandon FLoC and redirect its effort towards building a truly user-friendly Web.
What is FLoC?
In 2019, Google presented the Privacy Sandbox, its vision for the future of privacy on the Web. At the center of the project is a suite of cookieless protocols designed to satisfy the myriad use cases that third-party cookies currently provide to advertisers. Google took its proposals to the W3C, the standards-making body for the Web, where they have primarily been discussed in the Web Advertising Business Group, a body made up primarily of ad-tech vendors. In the intervening months, Google and other advertisers have proposed dozens of bird-themed technical standards: PIGIN, TURTLEDOVE, SPARROW, SWAN, SPURFOWL, PELICAN, PARROT… the list goes on. Seriously. Each of the “bird” proposals is designed to perform one of the functions in the targeted advertising ecosystem that is currently done by cookies.
FLoC is designed to help advertisers perform behavioral targeting without third-party cookies. A browser with FLoC enabled would collect information about its user’s browsing habits, then use that information to assign its user to a “cohort” or group. Users with similar browsing habits—for some definition of “similar”—would be grouped into the same cohort. Each user’s browser will share a cohort ID, indicating which group they belong to, with websites and advertisers. According to the proposal, at least a few thousand users should belong to each cohort (though that’s not a guarantee).
If that sounds dense, think of it this way: your FLoC ID will be like a succinct summary of your recent activity on the Web.
Google’s proof of concept used the domains of the sites that each user visited as the basis for grouping people together. It then used an algorithm called SimHash to create the groups. SimHash can be computed locally on each user’s machine, so there’s no need for a central server to collect behavioral data. However, a central administrator could have a role in enforcing privacy guarantees. In order to prevent any cohort from being too small (i.e. too identifying), Google proposes that a central actor could count the number of users assigned each cohort. If any are too small, they can be combined with other, similar cohorts until enough users are represented in each one.
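The locality-sensitive hashing idea behind SimHash can be illustrated with a toy sketch. This is a deliberate simplification for demonstration, not Google’s actual FLoC implementation: the hash function, bit width, and example domains below are all assumptions.

```python
import hashlib

def toy_simhash(domains, bits=8):
    """Toy locality-sensitive hash over a set of visited domains.

    Each domain casts a +1/-1 vote on every bit position, based on its
    own hash; the final ID takes the sign of each vote total. Users whose
    domain sets mostly overlap therefore tend to end up with IDs that
    differ in only a few bits.
    """
    votes = [0] * bits
    for domain in domains:
        h = int(hashlib.sha256(domain.encode()).hexdigest(), 16)
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if votes[i] > 0)

# Two users with heavily overlapping browsing, one with different habits
# (all domains here are made up for illustration).
alice = toy_simhash(["news.example", "shoes.example", "mail.example"])
bob   = toy_simhash(["news.example", "shoes.example", "bank.example"])
carol = toy_simhash(["gaming.example", "forum.example", "video.example"])
print(alice, bob, carol)
```

Because the computation depends only on the user’s own browsing data, it can run entirely inside the browser, which is why no central server ever needs to see raw histories; the central role in the proposal is limited to counting cohort sizes and merging cohorts that are too small.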
One thing that is specified is duration. FLoC cohorts will be re-calculated on a weekly basis, each time using data from the previous week’s browsing. This makes FLoC cohorts less useful as long-term identifiers, but it also makes them more potent measures of how users behave over time.
New privacy problems
FLoC is part of a suite intended to bring targeted ads into a privacy-preserving future. But the core design involves sharing new information with advertisers. Unsurprisingly, this also creates new privacy risks.
The first issue is fingerprinting. Browser fingerprinting is the practice of gathering many discrete pieces of information from a user’s browser to create a unique, stable identifier for that browser. EFF’s Cover Your Tracks project demonstrates how the process works: in a nutshell, the more ways your browser looks or acts different from others’, the easier it is to fingerprint.
Google has promised that the vast majority of FLoC cohorts will comprise thousands of users each, so a cohort ID alone shouldn’t distinguish you from a few thousand other people like you. However, that still gives fingerprinters a massive head start. If a tracker starts with your FLoC cohort, it only has to distinguish your browser from a few thousand others (rather than a few hundred million). In information theoretic terms, FLoC cohorts will contain several bits of entropy—up to 8 bits, in Google’s proof of concept trial. This information is even more potent given that it is unlikely to be correlated with other information that the browser exposes. This will make it much easier for trackers to put together a unique fingerprint for FLoC users.
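The entropy arithmetic here can be checked directly: uniquely identifying one browser among N takes log2(N) bits of information, so every bit of cohort entropy a tracker gets for free halves the pool it still has to search. A minimal sketch, assuming an illustrative population of 300 million browsers (a round number chosen for demonstration, not a measured figure):

```python
import math

total_users = 300_000_000  # illustrative browser population (assumption)
cohort_bits = 8            # cohort-ID entropy in Google's proof of concept

# Bits of information needed for a unique fingerprint, before and
# after the tracker learns the user's FLoC cohort ID.
bits_needed = math.log2(total_users)            # ~28.2 bits
bits_remaining = bits_needed - cohort_bits      # ~20.2 bits still to collect
pool_remaining = total_users // 2**cohort_bits  # candidates left in the pool

print(f"{bits_needed:.1f} bits for a unique ID; an {cohort_bits}-bit head "
      f"start leaves {bits_remaining:.1f} bits "
      f"(~{pool_remaining:,} candidate browsers).")
```

The remaining bits then come from the usual fingerprinting surfaces (user-agent, screen size, fonts, and so on), which is why a stable, free head start matters so much.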
Google has acknowledged this as a challenge, but has pledged to solve it as part of the broader “Privacy Budget” plan it has to deal with fingerprinting long-term. Solving fingerprinting is an admirable goal, and its proposal is a promising avenue to pursue. But according to the FAQ, that plan is “an early stage proposal and does not yet have a browser implementation.” Meanwhile, Google is set to begin testing FLoC as early as this month.
Fingerprinting is notoriously difficult to stop. Browsers like Safari and Tor have engaged in years-long wars of attrition against trackers, sacrificing large swaths of their own feature sets in order to reduce fingerprinting attack surfaces. Fingerprinting mitigation generally involves trimming away or restricting unnecessary sources of entropy—which is what FLoC is. Google should not create new fingerprinting risks until it’s figured out how to deal with existing ones.
The second problem is less easily explained away: the technology will share new personal data with trackers who can already identify users. For FLoC to be useful to advertisers, a user’s cohort will necessarily reveal information about their behavior.
The project’s Github page addresses this up front:
This API democratizes access to some information about an individual’s general browsing history (and thus, general interests) to any site that opts into it. … Sites that know a person’s PII (e.g., when people sign in using their email address) could record and reveal their cohort. This means that information about an individual’s interests may eventually become public.
As described above, FLoC cohorts shouldn’t work as identifiers by themselves. However, any company able to identify a user in other ways—say, by offering “log in with Google” services to sites around the Internet—will be able to tie the information it learns from FLoC to the user’s profile.
Two categories of information may be exposed in this way:
Specific information about browsing history. Trackers may be able to reverse-engineer the cohort-assignment algorithm to determine that any user who belongs to a specific cohort probably or definitely visited specific sites.
General information about demographics or interests. Observers may learn that in general, members of a specific cohort are substantially likely to be a specific type of person. For example, a particular cohort may over-represent users who are young, female, and Black; another cohort, middle-aged Republican voters; a third, LGBTQ+ youth.
This means every site you visit will have a good idea about what kind of person you are on first contact, without having to do the work of tracking you across the web. Moreover, as your FLoC cohort will update over time, sites that can identify you in other ways will also be able to track how your browsing changes. Remember, a FLoC cohort is nothing more, and nothing less, than a summary of your recent browsing activity.
You should have a right to present different aspects of your identity in different contexts. If you visit a site for medical information, you might trust it with information about your health, but there’s no reason it needs to know what your politics are. Likewise, if you visit a retail website, it shouldn’t need to know whether you’ve recently read up on treatment for depression. FLoC erodes this separation of contexts, and instead presents the same behavioral summary to everyone you interact with.
FLoC is designed to prevent a very specific threat: the kind of individualized profiling that is enabled by cross-context identifiers today. The goal of FLoC and other proposals is to avoid letting trackers access specific pieces of information that they can tie to specific people. As we’ve shown, FLoC may actually help trackers in many contexts. But even if Google is able to iterate on its design and prevent these risks, the harms of targeted advertising are not limited to violations of privacy. FLoC’s core objective is at odds with other civil liberties.
The power to target is the power to discriminate. By definition, targeted ads allow advertisers to reach some kinds of people while excluding others. A targeting system may be used to decide who gets to see job postings or loan offers just as easily as it is to advertise shoes.
Over the years, the machinery of targeted advertising has frequently been used for exploitation, discrimination, and harm. The ability to target people based on ethnicity, religion, gender, age, or ability allows discriminatory ads for jobs, housing, and credit. Targeting based on credit history, or characteristics systematically associated with it, enables predatory ads for high-interest loans. Targeting based on demographics, location, and political affiliation helps purveyors of politically motivated disinformation and voter suppression. All kinds of behavioral targeting increase the risk of convincing scams.
Google, Facebook, and many other ad platforms already try to rein in certain uses of their targeting platforms. Google, for example, limits advertisers’ ability to target people in “sensitive interest categories.” However, these efforts frequently fall short; determined actors can usually find workarounds to platform-wide restrictions on certain kinds of targeting or certain kinds of ads.
Even with absolute power over what information can be used to target whom, platforms are too often unable to prevent abuse of their technology. But FLoC will use an unsupervised algorithm to create its clusters. That means that nobody will have direct control over how people are grouped together. Ideally (for advertisers), FLoC will create groups that have meaningful behaviors and interests in common. But online behavior is linked to all kinds of sensitive characteristics: demographics like gender, ethnicity, age, and income; “big 5” personality traits; even mental health. It is highly likely that FLoC will group users along some of these axes as well. FLoC groupings may also directly reflect visits to websites related to substance abuse, financial hardship, or support for survivors of trauma.
Google has proposed that it can monitor the outputs of the system to check for any correlations with its sensitive categories. If it finds that a particular cohort is too closely related to a particular protected group, the administrative server can choose new parameters for the algorithm and tell users’ browsers to group themselves again.
This solution sounds both Orwellian and Sisyphean. In order to monitor how FLoC groups correlate with sensitive categories, Google will need to run massive audits using data about users’ race, gender, religion, age, health, and financial status. Whenever it finds a cohort that correlates too strongly along any of those axes, it will have to reconfigure the whole algorithm and try again, hoping that no other “sensitive categories” are implicated in the new version. This is a much more difficult version of the problem it is already trying, and frequently failing, to solve.
In a world with FLoC, it may be more difficult to target users directly based on age, gender, or income. But it won’t be impossible. Trackers with access to auxiliary information about users will be able to learn what FLoC groupings “mean”, what kinds of people they contain, through observation and experiment. Those who are determined to do so will still be able to discriminate. Moreover, this kind of behavior will be harder for platforms to police than it already is. Advertisers with bad intentions will have plausible deniability; after all, they aren’t directly targeting protected categories, they’re just reaching people based on behavior. And the whole system will be more opaque to users and regulators.
Google, please don’t do this
We wrote about FLoC and the other initial batch of proposals when they were first introduced, calling FLoC “the opposite of privacy-preserving technology.” We hoped that the standards process would shed light on FLoC’s fundamental flaws, causing Google to reconsider pushing it forward. Indeed, several issues on the official Github page raise the exact same concerns that we highlight here. However, Google has continued developing the system, leaving the fundamentals nearly unchanged. It has started pitching FLoC to advertisers, boasting that FLoC is a “95% effective” replacement for cookie-based targeting. And starting with Chrome 89, released on March 2, it’s deploying the technology for a trial run. A small portion of Chrome users, still likely millions of people, will be (or have been) assigned to test the new technology.
Make no mistake: if Google follows through on its plan to implement FLoC in Chrome, it will likely give everyone involved “options.” The system will probably be opt-in for the advertisers that will benefit from it, and opt-out for the users who stand to be hurt. Google will surely tout this as a step forward for “transparency and user control,” knowing full well that the vast majority of its users will not understand how FLoC works, and that very few will go out of their way to turn it off. It will pat itself on the back for ushering in a new, private era on the Web, free of the evil third-party cookie, the technology that Google helped extend well past its shelf life, making billions of dollars in the process.
It doesn’t have to be that way. The most important parts of the privacy sandbox, like dropping third-party identifiers and fighting fingerprinting, will genuinely change the Web for the better. Google can choose to dismantle the old scaffolding for surveillance without replacing it with something new and uniquely harmful.
We emphatically reject the future of FLoC. That is not the world we want, nor the one users deserve. Google needs to learn the correct lessons from the era of third-party tracking and design its browser to work for users, not for advertisers.
Note: We reached out to Google to verify certain facts presented in this post, as well as to request more information about the upcoming Origin Trial. We have not received a response at the time of posting.
#AceNewsReport – Jan.04: Snowden: “We Can Fix a Broken System”. Below is a message from whistleblower Edward Snowden. His revelations about secret surveillance programs opened the world’s eyes to a new level of government misconduct, and reinvigorated EFF’s continuing work in the courts and with lawmakers to end unlawful mass spying.
EFF is grateful to Ed for his support in our court cases, and to people like you for sustaining EFF during our Year-End Challenge membership drive. Your help is essential to pushing back the tide of unchecked surveillance.

Seven years ago I did something that would change my life and alter the world’s relationship to surveillance forever. When journalists revealed the truth about state deception and illegal conduct against citizens, it was human rights and civil liberties groups like EFF, backed by people around the world just like you, that seized the opportunity to hold authority to account.
Surveillance quiets resistance and takes away our choices. It robs us of private space, eroding our dignity and the things that make us human. When you’re secure from the spectre of judgement, you have room to think, to feel, and to make mistakes as your authentic self. That’s where you test your notions of what’s right. That’s when you question the things that are wrong. By sounding the alarm and shining a light on mass surveillance, we force governments around the world to confront their wrongdoing.
Slowly, but surely, grassroots work is changing the future. Laws like the USA Freedom Act have just begun to rein in excesses of government surveillance. Network operators and engineers are triumphantly “encrypting all the things” to harden the Internet against spying. Policymakers began holding digital privacy up to the light of human rights law. And we’re all beginning to understand the power of our voices online.

This is how we can fix a broken system. But it only works with your help. For 30 years, EFF members have joined forces to ensure that technology supports freedom, justice, and innovation for all people. It takes unique expertise in the courts, with policymakers, and on technology to fight digital authoritarianism, and thankfully EFF brings all of those skills to the fight.

EFF relies on participation from you to keep pushing the digital rights movement forward. Each of us plays a crucial role in advancing democracy for ourselves, our neighbors, and our children. I hope you’ll answer the call by joining EFF to build a better digital future together.
#AceNewsReport – Nov.25: In a report disclosing its involvement in the investigation, security firm Group-IB said the three suspects are members of a cybercrime group it has been tracking since 2019 under the codename TMT. Group-IB said the group primarily operated by sending out mass email spam campaigns containing files laced with malware.
To send their email spam, the group used the Gammadyne Mailer and Turbo-Mailer email automation tools and then relied on MailChimp to track if a recipient victim opened their messages:
The file attachments were laced with various strains of malware that granted hackers access to infected computers from where they focused on stealing credentials from browsers, email, and FTP clients:
#AceNewsDesk report – Published on November 25, 2020 at 06:45PM
#AceNewsReport – May.04: In the face of a global pandemic, there is an urgent need for reporting on the spread of the #coronavirus and how governments are responding. But it is in times of crisis that the civil liberties we value most are put to the test, and that is exactly what is happening now as governments around the world clamp down on journalism and stifle the free flow of critical information.
With so little currently known about the novel coronavirus, governments around the world have seized the opportunity to control the narrative around the virus and their responses to it. In countries including Algeria, Azerbaijan, China, Hungary, Indonesia, Iran, Palestine, Russia, South Africa, Thailand, and more, authorities have banned individuals and journalists from sharing false or misleading information about the coronavirus.
Criminalizing “false information,” however, gives the party in control of law enforcement the power to define what information is “true” or “correct.” And such laws also give the government the power to censor, detain, arrest, and prosecute those who share information that doesn’t align with the official state narrative.
This is already happening. In Cambodia, police have arrested at least 17 people for spreading “false information” about coronavirus—including four members of the opposition political party, all of whom remain in detention, and a teenage girl who expressed fears on social media about the rumored spread of the virus at her school.
In Turkey, authorities have detained people for making “unfounded” postings on social media criticizing the Turkish government’s response to the pandemic and suggesting that the coronavirus was spreading widely in the country—even though, according to independent reporting, this is exactly the case.
Police in Indian-administered Kashmir have detained journalists and threatened them with prosecution. The detained journalists had posted on social media about coronavirus, and about government censorship and militancy in Kashmir.
Even Puerto Rico, a United States territory that—like the fifty states—is bound by the free speech protections enshrined in the Constitution, has enacted a plainly unconstitutional law prohibiting, in certain circumstances, the spread of some types of “false information” related to the government’s response to the virus.
But as the world battles a novel and little-understood virus threatening lives and livelihoods around the globe, ensuring the free flow of information is more important now than ever. Who knows how the course of the virus could have been different if China had heeded Wuhan doctor Li Wenliang when he sought to sound the alarm about the new coronavirus in its earliest days, instead of silencing him with accusations of spreading false rumors?
By embracing China’s approach, governments are choosing to censor, instead of foster, reporting about how the crisis is unfolding. The threat of interrogation, detention, and arrest chills journalists, political activists, and individuals from sharing their experiences, investigating official actions, or challenging the government’s narrative.
To be sure, governments play a critical role in battling the global pandemic—and that includes by acting as sources of important information. But that does not mean that governments should anoint themselves the sole arbiters of truth and falsity, and strip individuals’ rights to investigate the government’s claims, question the official narrative, and share their research, observations, or experiences.
After all, the very premise of “false news” laws—that there always exists an identifiable, objective “truth”—is often hollow. Particularly in this quickly evolving crisis, even the most well-intentioned parties’ understandings of the virus are changing rapidly. Only two months ago, the U.S. government was stating that face masks were not effective and instructing people not to wear them—but today, the opposite guidance is in effect (and has been in some other countries for some time now).
General prohibitions on the dissemination of information based on vague and ambiguous ideas, including “false news” or “non-objective information,” are incompatible with international standards for restrictions on freedom of expression . . . and should be abolished.
And in a new report on COVID-19 and freedom of expression, David Kaye, the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, acknowledged the harms that mis- and disinformation pose in a pandemic, noted the elusiveness of any singular definition of disinformation, and re-emphasized the importance of countering untruths. But the Special Rapporteur warned against laws aimed at punishing false information, cautioning:
Measures to combat disinformation must never prevent journalists and media actors from carrying out their work or lead to content being unduly blocked on the Internet. . . . Vague prohibitions of disinformation effectively empower government officials with the ability to determine the truthfulness or falsity of content in the public and political domain.
As we observe World Press Freedom Day and celebrate the work of the press to hold governments accountable, we must also protect the ability of journalists, activists, and citizens to speak out without fear that they will be arrested or imprisoned for the information they share.
#AceNewsServices – Update: CHINA – Dec.06 – Zhou Yongkang, China’s former security chief and Politburo member, has been stripped of his Communist Party membership and will face prosecution, Xinhua reported early on Saturday, citing the Politburo of the party’s Central Committee.
Zhou, 72, was accused of a series of serious violations of “party and organisational discipline and secrecy”, ranging from taking bribes to leaking party and state secrets and “exchanging power and money for sex”.
The report said he took bribes and abused his post for the benefit of others, including mistresses, relatives and friends.
He was also accused of causing heavy losses of state-owned assets.
The announcement marked the first time that allegations of Zhou leaking state secrets had been made public.
The decision on Friday to expel Zhou and hand his case over to prosecutors came after the Central Commission for Discipline Inspection carried out an investigation into Zhou earlier this year, Xinhua reported.
The parliament of Afghanistan approved an agreement that allows troops of the NATO-led coalition to stay in the country beyond 2014. The agreement had been in limbo for months, as former President Hamid Karzai refused to sign it during his term.
The new agreement, ratified Sunday, allows the ISAF to maintain a total of 12,000 troops in Afghanistan next year. After a 152-5 vote, Nazifullah Salarzai, spokesman for Afghan President Ashraf Ghani, said the foreign troops will “train, advise and assist Afghan security forces.”
“Afghan forces are responsible for the security and defense of the Afghan people, and in the fight against international terrorism and training of our national security forces we count on the support and assistance of our international partners,” he said, AP reported.
U.S. Defence Secretary Chuck Hagel has ordered an overhaul of the U.S. nuclear program due to systematic problems.
At the height of the cold war between the United States and Soviet Union, the U.S. nuclear weapons program was routinely on high alert, and drew a great deal of attention, both in terms of manpower and military budget.
But once the Berlin Wall fell, the U.S. began focusing its assets elsewhere and the nuclear program was hit by security lapses, a lack of discipline, and lousy morale.
Hagel said he ordered an audit of the United States nuclear program, which found that a lack of “investment and support” over “far too many years” has led to a myriad of problems that should be addressed immediately.
“The Review found evidence of systematic problems that, if not addressed, could undermine the safety, security and effectiveness of elements of the force in the future,” Hagel said.
Hagel said the nuclear program is safe for now, but without attention he could not guarantee the weapons program would remain secure and effective.
He is asking for billions in funding, a 10 percent increase for the nuclear weapons programs in each of the next five years.
#AceNewsServices NORTH AMERICA: On October 02 a new CFR-sponsored Independent Task Force report, North America: Time for a New Focus, asserts that elevating and prioritizing the Canada-Mexico-U.S. relationship offers the best opportunity for strengthening the United States and its place in the world.
“It is time to put North America at the forefront of U.S. policy,” the report says. “The development and implementation of a strategy for U.S. economic, energy, security, environmental, and societal cooperation with its two neighbours can strengthen the United States at home and enhance its influence abroad.”
The Task Force proposes a comprehensive set of recommendations for deepening North American integration, concentrating on four pivotal areas—energy, economic competitiveness, security, and community. These include:
Capitalizing on North America’s promising energy outlook. The North American countries need a regional energy strategy to strengthen the continent’s energy infrastructure, expand energy exports, support Mexico’s historic reforms, improve safety, and encourage harmonized policies to promote energy conservation and reduce carbon emissions.
“For economic, environmental, and diplomatic reasons, the Task Force recommends that the U.S. government encourage increased energy connections with Canada and Mexico. The U.S. government should approve additional pipeline capacity, including the Keystone XL pipeline,” the report says. “The Task Force also proposes that the United States end restrictions on energy exports, including oil and LNG (liquefied natural gas).”
Bolstering economic competitiveness through the freer movement of goods and services across borders. Upgrading infrastructure and policies across borders would interconnect national economies securely and efficiently. Recognizing trilateral economic interests, the United States should also include Canada and Mexico in its negotiations for the Transatlantic Trade and Investment Partnership (TTIP) and other free trade agreements.
“The United States’ ability to compete in a dynamic and competitive world economy would be strengthened by enhanced economic ties with Canada and Mexico,” the report explains. “The Task Force recommends working toward the free and unimpeded movement of goods and services across North America’s common borders.”
Strengthening security through a unified continental strategy and “continuous border innovation.” While working toward the goal of a unified security strategy for North America, the United States and Canada should support Mexican efforts to strengthen the democratic rule of law, dismantle criminal networks, contribute to the development of resilient and cohesive communities, and reduce arms smuggling and drug consumption.
“The United States should shift from border-centric security toward a strategy of combining perimeter protection with security in depth through the use of intelligence, risk assessment, shared capabilities, and joint actions throughout the region,” the report says.
Fostering a North American community through comprehensive immigration reform, workforce development, and the creation of a mobility accord to facilitate the movement of workers. The U.S. Congress should pass comprehensive immigration reforms. To better aid the movement of North American workers, the three countries should also create a North American Mobility Accord, expand visas for skilled workers, streamline recognition of professional credentials, and develop a regional educational innovation strategy.
“The Task Force strongly recommends the passage of comprehensive federal immigration reform that secures U.S. borders, prevents illegal entry, provides visas on the basis of economic need, invites talented and skilled people to settle in the United States, and offers a pathway to legalization for undocumented immigrants now in the United States,” the report says.
Chaired by David H. Petraeus, retired U.S. Army general and chairman of the KKR Global Institute, and Robert B. Zoellick, former president of the World Bank Group and chairman of Goldman Sachs’s International Advisors, the Task Force is composed of a diverse and distinguished group of experts that includes former government officials, scholars, and others. The project is directed by CFR Senior Fellow for Latin America Studies Shannon K. O’Neil.
#AceNewsServices – BRITAIN (London) – October 05 – Former MI6 chief John Scarlett has warned British parents that sexual predators could use location-pinpointing mobile and internet devices to target their children.
‘ Sexual Predators could track Children with GPS on Smartphones and Tablets ‘
Sir John Scarlett, chairman of the Joint Intelligence Committee during the Iraq war, who stepped down as head of Britain’s Secret Intelligence Service in 2009, said parents must be more vigilant regarding their children’s use of tablet devices and smartphones.
He said vigilance was needed as youngsters could have their devices hacked by pedophiles looking to track their movements.
The former spy chief’s comments come as GPS technology integrated into tablet computers and mobile phones proliferates, enabling users to find their devices if misplaced, access directions via Google Maps, and pinpoint businesses or services close to their location.
Speaking at the annual meeting of the Headmasters’ and Headmistresses’ Conference (HMC) in Wales, Scarlett warned that young children were particularly vulnerable to such predators. He also said teenagers, who had grown up with sophisticated technology, were especially “relaxed” about threats.
There is a clear “generational divide” characterizing internet usage, with children and teenagers being notably less cautious about the personal information they disclose online, he said.
Scarlett claimed the abuse of tracking tools, and the disclosure of children’s data on the web, jeopardizes security to a greater extent than state surveillance. He added parents often show complacency regarding their children’s online activity.
He warned that certain location devices were so advanced they could reveal whether an individual was in a bathroom – as well as intimate physiological details about them such as their heartbeat.
“You’ve got to know what your children are doing,” Scarlett told the conference gathering. He acknowledged, however, that this was not an easy task.
“Personally what worries me most are the tracking devices,” he said.
Another step toward “Police State America” here, links and video below. This isn’t good for you guys in the USA: having an internet licence means you are logged on mobile phones, tablets, laptops, PCs and any other device. Why do this?
A few years back, the White House had a brilliant idea: Why not create a single, secure online ID that Americans could use to verify their identity across multiple websites, starting with local government services. The New York Times described it at the time as a “driver’s license for the internet.”
Sound convenient? It is. Sound scary? It is.
Next month, a pilot program of the “National Strategy for Trusted Identities in Cyberspace” will begin in government agencies in two US states, to test whether the pros of a federally verified cyber ID outweigh the cons.
The goal is to put to bed once and for all our current ineffective and tedious system of using passwords for online authentication, which itself was a cure for the even more ineffective and tedious process of walking into a brick-and-mortar building and presenting a human being with two forms of paper identification.
The rub is that online identity verification is heaps more convenient for citizens and cost-effective for government agencies, but it’s also fraught with insecurities; federal and state governments lose billions of dollars a year to fraud, and that trickles down to taxpayers.
Meanwhile, the technology for more secure next-gen authentication exists, developed by various tech firms in the private sector, but security groups have had a hell of a time implementing any of it on a broad scale. Enter the government, which proposed the national ID strategy to help standardize the process using a plan called the “identity ecosystem.”
The vision is to use a system that works similarly to how we conduct the most sensitive forms of online transactions, like applying for a mortgage. It will utilize two-step authentication, say, some combination of an encrypted chip in your phone, a biometric ID, and a question about the name of your first cat.
But instead of going through a different combination of steps for each agency website, the same process and ID token would work across all government services: from food stamps and welfare to registering for a fishing license.
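The "one token, many agency sites" idea described above can be sketched roughly as follows. This is an illustrative assumption of how such an identity ecosystem might work, not the program's actual design; the HMAC signing scheme, the shared verifier key, and all names here are hypothetical:

```python
import hashlib
import hmac
import json
import time

# Hypothetical key held by the identity provider and trusted verifiers.
SECRET = b"shared-verifier-key"

def issue_token(user_id: str, factors_passed: int) -> str:
    """Issue a signed identity token once two-step authentication succeeds."""
    assert factors_passed >= 2, "two-step authentication required"
    payload = json.dumps({"sub": user_id, "iat": int(time.time())})
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_token(token: str):
    """Any participating agency site runs the same check on the same token."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected):
        return json.loads(payload)["sub"]
    return None  # tampered or forged token

token = issue_token("alice", factors_passed=2)
print(verify_token(token))  # prints: alice
```

The convenience (and the risk) both come from the same property: once issued, the token is accepted everywhere the shared verification logic runs, whether the service is a food-stamp portal or a fishing-license office.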
The original proposal was quick to point out that this isn’t a federally mandated national ID. But if successful, it could pave the way for an interoperable authentication protocol that works for any website, from your Facebook account to your health insurance company.
There’s no doubt secure online identification is a problem overdue for a solution, but creating a system that would work like an all-access token for the internet is a scary can of worms to open.
To start, there’s the privacy issue. Unsurprisingly, the Electronic Frontier Foundation immediately pointed out the red flags, arguing that the right to anonymous speech in the digital realm is protected under the First Amendment. It called the program “radical,” “concerning,” and pointed out that the plan “makes scant mention of the unprecedented threat such a scheme would pose to privacy and free speech online.”
And the keepers of the identity credentials wouldn’t be the government itself, but a third party organization. When the program was introduced in 2011, banks, technology companies or cellphone service providers were suggested for the role, so theoretically Google or Verizon could have access to a comprehensive profile of who you are that’s shared with every site you visit, as mandated by the government.
Post-NSA revelations, we have a good sense for the dystopian Big Brother society the EFF is worried about. As the organization told the Times, at the least “we would need new privacy laws or regulations to prohibit identity verifiers from selling user data or sharing it with law enforcement officials without a warrant.”
Then there’s the problem of putting all your security eggs in one vulnerable basket. If a hacker gets their hands on your cyber ID, they have the keys to everything.
For now, this is all just speculation. The program is just entering a test phase with select state government agencies (there are currently plans to expand the trial to 10 more organizations).
But it’s not far-fetched to think we’re moving toward a standardized way to prove our identity in cyberspace the same way we do offline.
The White House argues cutting down on inefficiencies and fraud would bolster the information economy. In an era where we have cars that drive themselves and flying robots delivering beer, you have to wonder how much longer people are going to put up with standing in line at the DMV for four hours to hand a teller (with a taxpayer-paid salary) a copy of your birth certificate and piece of mail to prove you are you.
If an analysis of the pilot programs in Michigan and Pennsylvania finds the centralized ID saves time and money and spares us the DMV line, privacy advocates are going to have a hell of a fight ahead of them.
#AceWorldNews – DAMASCUS – April 14 – (SANA) – Army units continued their operations in several areas across the country on Monday, eliminating a number of terrorists and regaining control of several areas in Damascus Countryside and Lattakia.
Army regains control of Maaloula and al-Sarkha towns in Damascus Countryside
Army units on Monday restored security and stability to al-Sarkha area in al-Qalamoon in Damascus Countryside, a military source told SANA.
The source added that the army units regained control over the areas surrounding the town after eliminating large numbers of terrorists.
Army units also restored security and stability to Maaloula town, pursuing terrorists in the area surrounding the town, eliminating a number of them, and dismantling the mines and explosives planted by the terrorists in the town.
Other units advanced in the town of Jabaadin in al-Qalamoun area, eliminating terrorist gatherings in it.
#AceWorldNews – CAIRO – April 05 – A senior security source from the Interior Ministry has said that the helicopter transferring deposed President Mohamed Morsi arrived at the Police Academy for his trial, along with others, over the killing of protesters.
Morsi was transferred as usual from the runway to the courtroom under high security measures, the source told the state-run news agency MENA.
The security source said earlier on Saturday that the rest of the suspects involved in the lawsuit arrived at the academy after being transferred from Tora Prison under tight security.
#AceWorldNews – TASHKENT – March 28 – Member-states of the Shanghai Cooperation Organisation (SCO) will ensure security and counter the threat of terrorism after the NATO-led coalition leaves Afghanistan, Russia’s Federal Security Service deputy director Sergei Smirnov has said – Tass.
“There is a danger that the withdrawal of coalition troops from Afghanistan will weaken the regime, including counterintelligence. Terrorist groups and terrorists may try to penetrate into SCO countries,” Sergei Smirnov said on Friday.
“We’re planning to monitor the situation in the border area and inside Afghanistan,” he said – Tass