EXCLUSIVE REPORT: Future of Work Summit Report: Responsible AI will give a competitive advantage as long as it is implemented in a safe & ethical manner #AceNewsDesk report

#AceNewsReport – Jan.20: Responsible AI is a governance framework that covers ethical, legal, safety, privacy, and accountability concerns.

#AceDailyNews says according to a VentureBeat news report: There is little doubt that AI is changing the business landscape and providing competitive advantages to those who embrace it. It is time, however, to move beyond the simple implementation of AI and to ensure that AI is being done in a safe and ethical manner. This is called responsible AI, and it will serve not only as a protection against negative consequences but also as a competitive advantage in and of itself.


Image Credit: aislan13/Getty Images



What is responsible AI?

Responsible AI is a governance framework that covers ethical, legal, safety, privacy, and accountability concerns. Although the implementation of responsible AI varies by company, the necessity of it is clear. Without responsible AI practices in place, a company is exposed to serious financial, reputational, and legal risks. On the positive side, responsible AI practices are becoming prerequisites for even bidding on certain contracts, especially when governments are involved; a well-executed strategy will greatly help in winning those bids. Additionally, embracing responsible AI can improve the company’s overall reputation.

Values by design

Much of the problem with implementing responsible AI comes down to foresight: the ability to predict what ethical or legal issues an AI system could have during its development and deployment lifecycle. Right now, most responsible AI considerations happen after an AI product is developed, which is a very ineffective way to implement AI. If you want to protect your company from financial, legal, and reputational risk, you have to start projects with responsible AI in mind. Your company needs to have values by design, not whatever values you happen to end up with at the end of a project.

Implementing values by design

Responsible AI covers a large number of values that need to be prioritized by company leadership. While covering all areas is important in any responsible AI plan, how much effort your company expends on each value is up to company leaders. There has to be a balance between checking for responsible AI and actually implementing AI. If you expend too much effort on responsible AI, your effectiveness may suffer. On the other hand, ignoring responsible AI is being reckless with company resources. The best way to manage this trade-off is to start with a thorough analysis at the onset of the project, not as an after-the-fact effort.

Best practice is to establish a responsible AI committee to review your AI projects before they start, periodically during the projects, and upon completion. The purpose of this committee is to evaluate each project against responsible AI values and approve it, disapprove it, or disapprove it with actions required to bring the project into compliance. These actions can include requesting more information or requiring fundamental changes to the project. Like an institutional review board used to monitor ethics in biomedical research, this committee should contain both AI experts and non-technical members. The non-technical members can come from any background and serve as a reality check on the AI experts. AI experts, on the other hand, may better understand the difficulties and remediations possible but can become too accustomed to institutional and industry norms that may not be sensitive enough to the concerns of the greater community.
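
As a rough illustration of how such verdicts might be tracked, the sketch below records a committee decision in Python so that a “disapprove with actions” outcome stays auditable. The field names and Verdict values are illustrative assumptions, not part of any standard.

```python
# A minimal sketch of recording responsible AI committee reviews.
# The field names and Verdict values are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum


class Verdict(Enum):
    APPROVE = "approve"
    DISAPPROVE = "disapprove"
    DISAPPROVE_WITH_ACTIONS = "disapprove_with_actions"


@dataclass
class CommitteeReview:
    project: str
    stage: str                      # "onset", "periodic", or "completion"
    verdict: Verdict
    required_actions: list[str] = field(default_factory=list)


review = CommitteeReview(
    project="churn-model-v2",
    stage="onset",
    verdict=Verdict.DISAPPROVE_WITH_ACTIONS,
    required_actions=["Document consent for the training data"],
)
```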

What values should the Responsible AI Committee consider?

The values to focus on should be chosen by the business to fit within its overall mission statement. Your business will likely choose specific values to emphasize, but all major areas of concern should be covered. There are many frameworks you can use for inspiration, such as Google’s and Facebook’s. For this article, however, we will base the discussion on the recommendations set forth by the High-Level Expert Group on Artificial Intelligence set up by the European Commission in The Assessment List for Trustworthy Artificial Intelligence. These recommendations cover seven areas. We will explore each area and suggest questions to ask about each.

1. Human agency and oversight

AI projects should respect human agency and decision making. This principle involves how the AI project will influence or support humans in the decision-making process. It also involves how the subjects of AI will be made aware of the AI and put trust in its outcomes. Some questions that need to be asked include:

  • Are users made aware that a decision or outcome is the result of an AI project?
  • Is there any detection and response mechanism to monitor adverse effects of the AI project?
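
As one hedged illustration of the second question, the sketch below monitors a rolling window of decisions and alerts a human reviewer when the rate of adverse outcomes drifts past an agreed threshold. The `notify_oversight_team` hook and the 30% threshold are hypothetical placeholders.

```python
# A minimal sketch of a detection-and-response mechanism for adverse
# effects. Assumes a stream of binary decisions (e.g., loan denials);
# the notification hook and threshold are hypothetical.
from collections import deque


def notify_oversight_team(message: str) -> None:
    # Hypothetical hook: wire this to email, a ticket queue, etc.
    print(f"[OVERSIGHT ALERT] {message}")


class AdverseEffectMonitor:
    def __init__(self, window: int = 1000, max_denial_rate: float = 0.30):
        self.outcomes = deque(maxlen=window)  # rolling window of decisions
        self.max_denial_rate = max_denial_rate

    def record(self, denied: bool) -> None:
        self.outcomes.append(denied)
        rate = sum(self.outcomes) / len(self.outcomes)
        # Only alert once the window holds enough data to be meaningful.
        if len(self.outcomes) == self.outcomes.maxlen and rate > self.max_denial_rate:
            notify_oversight_team(
                f"Denial rate {rate:.1%} exceeds threshold "
                f"{self.max_denial_rate:.1%}; route recent cases to a human."
            )
```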

2. Technical robustness and safety

Technical robustness and safety require that AI projects preemptively address the risks of the AI performing unreliably and minimize the impact when it does. The AI project should perform predictably and consistently, and it should be protected from cybersecurity threats. Some questions that need to be asked include:

  • Has the AI system been tested by cybersecurity experts?
  • Is there a monitoring process to measure and assess risks associated with the AI project?
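
One possible form such a monitoring process could take is a drift check that compares live inputs against the training distribution, since a model’s validated behavior no longer applies to data it never saw. The three-standard-deviation threshold below is an illustrative assumption.

```python
# A minimal sketch of input-drift monitoring, assuming scalar features.
# Flags features whose live distribution has shifted far from training.
import numpy as np


def drift_score(train_col: np.ndarray, live_col: np.ndarray) -> float:
    # Standardized difference of means: how many training standard
    # deviations the live data has shifted.
    return abs(live_col.mean() - train_col.mean()) / (train_col.std() + 1e-9)


def check_drift(train: np.ndarray, live: np.ndarray, threshold: float = 3.0):
    # Returns indices of features whose drift exceeds the threshold,
    # i.e., inputs the model was never validated on.
    return [
        i for i in range(train.shape[1])
        if drift_score(train[:, i], live[:, i]) > threshold
    ]
```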

3. Privacy and data governance

AI should protect individual and group privacy, both in its inputs and its outputs. The algorithm should not include data that was gathered in a way that violates privacy, and it should not give results that violate the privacy of the subjects, even when bad actors are trying to force such errors. In order to do this effectively, data governance must also be a concern. Appropriate questions to ask include:

  • Does any of the training or inference data use protected personal data?
  • Can the results of this AI project be crossed with external data in a way that would violate an individual’s privacy?
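
A first-pass answer to the first question can be partially automated by flagging dataset columns whose names suggest protected personal data. The patterns below are illustrative; a real review must also inspect the values themselves and how they were collected.

```python
# A first-pass sketch: flag column names that suggest protected personal
# data before a dataset reaches training. Name-based scanning is only a
# starting point for a human review, not a substitute for one.
import re

PII_PATTERNS = [
    r"name", r"email", r"phone", r"address", r"ssn|social.?security",
    r"dob|birth", r"gender", r"race|ethnic", r"religion", r"passport",
]


def flag_pii_columns(column_names: list[str]) -> list[str]:
    # Returns the columns a human reviewer should examine before training.
    return [
        col for col in column_names
        if any(re.search(p, col, re.IGNORECASE) for p in PII_PATTERNS)
    ]


print(flag_pii_columns(["user_email", "purchase_amount", "date_of_birth"]))
# -> ['user_email', 'date_of_birth']
```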

4. Transparency

Transparency covers concerns about the traceability of individual results and the overall explainability of AI algorithms. Traceability allows the user to understand why an individual decision was made. Explainability refers to the user being able to understand the basics of the algorithm used to make the decision, as well as which factors were involved in the decision-making process for their specific prediction. Questions to ask are:

  • Do you monitor and record the quality of the input data?
  • Can a user receive feedback as to how a certain decision was made and what they could do to change that decision?
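
As a minimal sketch of that kind of feedback, the example below uses a linear model, where each feature’s contribution to the score is simply its weight times its value. The toy loan data is invented; more complex models would need dedicated explainability tooling such as SHAP.

```python
# A minimal sketch of per-decision traceability with a linear model.
# The toy data and feature names are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[35, 52000], [22, 18000], [47, 91000]], dtype=float)  # age, income
y = np.array([1, 0, 1])  # toy labels: loan approved or not
feature_names = ["age", "income"]

model = LogisticRegression().fit(X, y)

applicant = np.array([29.0, 31000.0])
# Each feature's share of the linear score (relative to a zero baseline;
# centering features would give contributions relative to the average case).
contributions = model.coef_[0] * applicant
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.2f}")
```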

5. Diversity, non-discrimination

To be considered responsible AI, the project must work as well as possible for all subgroups of people. While AI bias can rarely be eliminated entirely, it can be effectively managed. This mitigation can take place during the data collection process, by including people from more diverse backgrounds in the training dataset, and can also be applied at inference time to help balance accuracy between different groupings of people. Common questions include:

  • Did you balance your training dataset as much as possible to include various subgroups of people?
  • Do you define fairness and then quantitatively evaluate the results?
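
A minimal sketch of the second question follows, assuming demographic parity (equal positive-prediction rates across subgroups) as the chosen fairness definition. Other definitions, such as equalized odds, would be measured the same way with different statistics.

```python
# A minimal sketch of quantitative fairness evaluation, assuming
# demographic parity as the fairness definition. The data is invented.
import numpy as np


def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    # Difference between the highest and lowest positive-prediction
    # rate across subgroups; 0.0 means perfectly equal rates.
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)


y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(f"Demographic parity gap: {demographic_parity_gap(y_pred, group):.2f}")
# Group a has a 0.75 positive rate vs 0.25 for group b -> gap of 0.50.
```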

6. Societal and environmental well-being

An AI project should be evaluated in terms of its impact on its subjects and users, along with its impact on the environment. Social norms such as democratic decision making, upholding values, and preventing addiction to AI projects should be upheld. Furthermore, the impact of the AI project’s decisions on the environment should be considered where applicable. One factor applicable in nearly all cases is an evaluation of the amount of energy needed to train the required models. Questions that can be asked:

  • Did you assess the project’s impact on its users and subjects as well as other stakeholders?
  • How much energy is required to train the model and how much does that contribute to carbon emissions?
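
A back-of-the-envelope sketch of the energy question looks like this. The GPU count, power draw, training time, and grid carbon intensity below are illustrative assumptions to be replaced with your own hardware and regional figures (or measured with a tool such as codecarbon).

```python
# A back-of-the-envelope estimate of training energy and emissions.
# Every figure below is an illustrative assumption.
num_gpus = 8
gpu_power_kw = 0.3          # ~300 W per accelerator under load
training_hours = 72
pue = 1.5                   # data-center power usage effectiveness
grid_kg_co2_per_kwh = 0.4   # varies widely by regional energy mix

energy_kwh = num_gpus * gpu_power_kw * training_hours * pue
emissions_kg = energy_kwh * grid_kg_co2_per_kwh
print(f"Estimated training energy: {energy_kwh:.0f} kWh")
print(f"Estimated emissions: {emissions_kg:.0f} kg CO2")
```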

7. Accountability

Some person or organization needs to be responsible for the actions and decisions made by the AI project or encountered during development. There should be a system to ensure an adequate possibility of redress in cases where detrimental decisions are made. There should also be some time and attention paid to risk management and mitigation. Appropriate questions include:

  • Can the AI system be audited by third parties for risk?
  • What are the major risks associated with the AI project and how can they be mitigated?

The bottom line

The seven values of responsible AI outlined above provide a starting point for an organization’s responsible AI initiative. Organizations that pursue responsible AI will find they increasingly have access to more opportunities, such as bidding on government contracts. Organizations that don’t implement these practices expose themselves to legal, ethical, and reputational risks.

David Ellison is Senior AI Data Scientist at Lenovo.

#AceNewsDesk report ………….Published: Jan.20: 2022:

Editor says …Sterling Publishing & Media Service Agency is not responsible for the content of external sites or of any reports, posts or links, and can also be found on Telegram: https://t.me/acenewsdaily. All of our posts from Twitter can be found here: https://acetwitternews.wordpress.com/ and all WordPress and live posts and links here: https://acenewsroom.wordpress.com/ and thanks for following; as always we appreciate every like, reblog or retweet, and free help and guidance tips on your PC software or help & guidance from our experts are available at AcePCHelp.WordPress.Com

#ai, #technology, #work

(BEIJING) JUST IN: China has unveiled its latest military technological achievement – fighter jets controlled by artificial intelligence (AI) #AceNewsDesk report

#AceNewsReport – June.20: In simulated test battles, the AI-controlled jets were able to shoot down real human pilots, according to the Chinese military’s official newspaper, PLA Daily. Brigade commander Du Jianfeng told the publication that the AI jets were becoming more integrated into pilot training.

BEIJING: China unleashes fighter jets capable of shooting down real pilots – ‘Better than humans’ using artificial intelligence, according to the Daily Express

By China News: June 17, 2021

China unleashes AI-controlled jets

Mr Du said the aircraft were capable of making “flawless tactical decisions”.

He also praised the AI’s “skill at handling the aircraft.”

Mr Du continued to say that the newly developed system was a powerful aid for “sharpening the sword” of Chinese pilots.

One of the most remarkable aspects of the new system is the ability to copy and instantly master new tactics and combat skills.

Chinese jets fly in formation (Image: Getty Images)

Military experts have proposed that integrating the AI system into modern warfare could help pilots make decisions by taking the entire battlefield into consideration.

Fang Guoyu, one of the pilots tested against the system, was “shot down” after the AI used his own technique against him.

He said: “At first, it was not difficult to win against the AI.

“But by studying data, each engagement became a chance for it to improve.

Simulator cockpit for fighting AI jets (Image: Getty Images)

“The move with which you defeated it today will be in its hands tomorrow.”

Mr Fang added the system excelled at “learning, assimilating, reviewing, and researching”.

According to reports last year, an AI system in the US also defeated an American fighter pilot in a simulated aerial battle.

In the simulated test, conducted by the Defence Advanced Research Projects Agency at the Pentagon, the AI system was able to defeat at least seven teams.

US military have also developed AI jets (Image: Getty Images)

A statement from the defence agency said: “In a future air domain contested by adversaries, a single human pilot can increase lethality by effectively orchestrating multiple autonomous unmanned platforms from within a manned aircraft.

“This shifts the human role from single platform operator to mission commander.”

The latest news of China’s AI-controlled jets follows an incursion into Taiwan airspace by the People’s Liberation Army on Tuesday.

The incursion was the largest to date, with at least 28 warplanes involved in the operation.

Simulator of fighter jet (Image: Getty Images)

#AceNewsDesk report ………Published: Jun.20: 2021:

Editor says #AceNewsDesk reports by https://t.me/acenewsdaily and all our posts and links can be found here for Twitter and live feeds: https://acenewsroom.wordpress.com/ and thanks for following; as always we appreciate every like, reblog or retweet, and free help and guidance tips on your PC software or help & guidance from our experts are available at AcePCHelp.WordPress.Com

#ai, #beijing, #china

(BEIJING) JUST IN: A camera system that uses AI and facial recognition intended to reveal states of emotion has been tested on Uyghurs in Xinjiang, the BBC has been told #AceNewsDesk report

#AceNewsReport – May.27: The Chinese embassy in London has not responded directly to the claims but says political and social rights in all ethnic groups are guaranteed:

CHINA: ‘AI emotion-detection software tested on Uyghurs: A software engineer claimed to have installed such systems in police stations in the province; a human rights advocate who was shown the evidence described it as shocking’


By Jane Wakefield
Technology reporter 

A gate of what is officially known as a “vocational skills education centre” in Xinjiang

Xinjiang is home to 12 million ethnic minority Uyghurs, most of whom are Muslim.

Citizens in the province are under daily surveillance. The area is also home to highly controversial “re-education centres”, called high security detention camps by human rights groups, where it is estimated that more than a million people have been held. 

Beijing has always argued that surveillance is necessary in the region because it says separatists who want to set up their own state have killed hundreds of people in terror attacks.

Xinjiang is believed to be one of the most surveilled areas in the world (Image: Getty Images)

The software engineer agreed to talk to the BBC’s Panorama programme under condition of anonymity, because he fears for his safety. The company he worked for is also not being revealed. 

But he showed Panorama five photographs of Uyghur detainees who he claimed had had the emotion recognition system tested on them. Data from the system purports to indicate a person’s state of mind, with red suggesting a negative or anxious state of mind.

“The Chinese government use Uyghurs as test subjects for various experiments just like rats are used in laboratories,” he said.

And he outlined his role in installing the cameras in police stations in the province: “We placed the emotion detection camera 3m from the subject. It is similar to a lie detector but far more advanced technology.”

He said officers used “restraint chairs” which are widely installed in police stations across China.

“Your wrists are locked in place by metal restraints, and [the] same applies to your ankles.”

He provided evidence of how the AI system is trained to detect and analyse even minute changes in facial expressions and skin pores.

According to his claims, the software creates a pie chart, with the red segment representing a negative or anxious state of mind.

He claimed the software was intended for “pre-judgement without any credible evidence”.

The Chinese embassy in London did not respond to questions about the use of emotional recognition software in the province but said: “The political, economic, and social rights and freedom of religious belief in all ethnic groups in Xinjiang are fully guaranteed.

“People live in harmony regardless of their ethnic backgrounds and enjoy a stable and peaceful life with no restriction to personal freedom.”

The evidence was shown to Sophie Richardson, China director of Human Rights Watch.

“It is shocking material. It’s not just that people are being reduced to a pie chart, it’s people who are in highly coercive circumstances, under enormous pressure, being understandably nervous and that’s taken as an indication of guilt, and I think, that’s deeply problematic.”

Suspicious behaviour

According to Darren Byler, from the University of Colorado, Uyghurs routinely have to provide DNA samples to local officials, undergo digital scans and most have to download a government phone app, which gathers data including contact lists and text messages.

“Uyghur life is now about generating data,” he said.

“Everyone knows that the smartphone is something you have to carry with you, and if you don’t carry it you can be detained, they know that you’re being tracked by it. And they feel like there’s no escape,” he said.

Most of the data is fed into a computer system called the Integrated Joint Operations Platform, which Human Rights Watch claims flags up supposedly suspicious behaviour.

“The system is gathering information about dozens of different kinds of perfectly legal behaviours including things like whether people were going out the back door instead of the front door, whether they were putting gas in a car that didn’t belong to them,” said Ms Richardson.

“Authorities now place QR codes outside the doors of people’s homes so that they can easily know who’s supposed to be there and who’s not.”

Orwellian?

There has long been debate about how closely tied Chinese technology firms are to the state. US-based research group IPVM claims to have uncovered evidence in patents filed by such companies that suggest facial recognition products were specifically designed to identify Uyghur people.

A patent filed in July 2018 by Huawei and the China Academy of Sciences describes a face recognition product that is capable of identifying people on the basis of their ethnicity.

Huawei said in response that it did “not condone the use of technology to discriminate or oppress members of any community” and that it was “independent of government” wherever it operated.

The group has also found a document which appears to suggest the firm was developing technology for a so-called One Person, One File system.

“For each person the government would store their personal information, their political activities, relationships… anything that might give you insight into how that person would behave and what kind of a threat they might pose,” said IPVM’s Conor Healy.

Hikvision makes a range of products including cameras (Image: VCG)

“It makes any kind of dissidence potentially impossible and creates true predictability for the government in the behaviour of their citizens. I don’t think that [George] Orwell would ever have imagined that a government could be capable of this kind of analysis.”

Huawei did not specifically address questions about its involvement in developing technology for the One Person, One File system but repeated that it was independent of government wherever it operated.

The Chinese embassy in London said it had “no knowledge” of these programmes.

IPVM also claimed to have found marketing material from Chinese firm Hikvision advertising a Uyghur-detecting AI camera, and a patent for software developed by Dahua, another tech giant, which could also identify Uyghurs.

Dahua said its patent referred to all 56 recognised ethnicities in China and did not deliberately target any one of them.

It added that it provided “products and services that aim to help keep people safe” and complied “with the laws and regulations of every market” in which it operates, including the UK.

Hikvision said the details on its website were incorrect and “uploaded online without appropriate review”, adding that it did not sell or have in its product range “a minority recognition function or analytics technology”.

Dr Lan Xue, chairman of China’s National Committee on AI Governance, said he was not aware of the patents.

“Outside China there are a lot of those sorts of charges. Many are not accurate and not true,” he told the BBC.

“I think that the Xinjiang local government had the responsibility to really protect the Xinjiang people… if technology is used in those contexts, that’s quite understandable,” he said.

The Chinese embassy in the UK had a more robust defence, telling the BBC: “There is no so-called facial recognition technology featuring Uyghur analytics whatsoever.”

Daily surveillance

Hu Liu feels his life is under constant surveillance

China is estimated to be home to half of the world’s almost 800 million surveillance cameras.

It also has a large number of smart cities, such as Chongqing, where AI is built into the foundations of the urban environment.

Chongqing-based investigative journalist Hu Liu told Panorama of his own experience: “Once you leave home and step into the lift, you are captured by a camera. There are cameras everywhere.”

“When I leave home to go somewhere, I call a taxi, the taxi company uploads the data to the government. I may then go to a cafe to meet a few friends and the authorities know my location through the camera in the cafe.

“There have been occasions when I have met some friends and soon after someone from the government contacts me. They warned me, ‘Don’t see that person, don’t do this and that.’

“With artificial intelligence we have nowhere to hide,” he said.

Find out more about this on Panorama’s Are You Scared Yet, Human? – available on iPlayer from 26 May

#AceNewsDesk report ……Published: May.27: 2021:

Editor says #AceNewsDesk reports by https://t.me/acenewsdaily and all our posts and links can be found here for Twitter and live feeds: https://acenewsroom.wordpress.com/ and thanks for following; as always we appreciate every like, reblog or retweet, and free help and guidance tips on your PC software or help & guidance from our experts are available at AcePCHelp.WordPress.Com

#ai, #beijing, #china, #facial-recognition, #london, #software