Mr Fix It AMERICA
MrFixIt -MrFixItAmerica
www.MrFixIt.Ai
TJ@MrFixIt.Ai - 405-215-5985
MrFixIt Deep Mind

My 600th domain: MrFixIt Deep Mind...

Progress from learning the A.I. design for a specific PLATFORM.

Now I see the significance of "A.I." instead of just Ai.

Www.MrFixItArtificialIntelligenge.com

MrFixIt.ai

Alphabet chief executive Sundar Pichai spoke about Google DeepMind at a Google I/O event in Mountain View, Calif., May 10. © Jeff Chiu

SAN FRANCISCO — Earlier this year, Google, locked in an accelerating competition with rivals like Microsoft and OpenAI to develop AI technology, was looking for ways to put a charge into its artificial intelligence research.

So in April, Google merged DeepMind, a research lab it had acquired in London, with Brain, an artificial intelligence team it started in Silicon Valley.

Four months later, the combined groups are testing ambitious new tools that could turn generative AI — the technology behind chatbots such as OpenAI’s ChatGPT and Google’s own Bard — into a personal life coach.

Google DeepMind has been working with generative AI to perform at least 21 different types of personal and professional tasks, including tools to give users life advice, ideas, planning instructions, and tutoring tips, according to documents and other materials reviewed by The New York Times.

The project was indicative of the urgency of Google’s effort to propel itself to the front of the AI pack and signaled its increasing willingness to trust AI systems with sensitive tasks.

The capabilities also marked a shift from Google’s earlier caution on generative AI. In a slide deck presented to executives in December, the company’s AI safety experts had warned of the dangers of people becoming too emotionally attached to chatbots.

 
Despite its early work on generative AI, Google was overshadowed by OpenAI’s release of ChatGPT in November, igniting a race among tech giants and startups for primacy in the fast-growing space.

Google has spent the last nine months trying to demonstrate it can keep up with OpenAI and its partner Microsoft, releasing Bard, improving its AI systems, and incorporating the technology into many of its existing products, including its search engine and Gmail.

Scale AI, a contractor working with Google DeepMind, assembled teams of workers to test the capabilities, including more than 100 experts with doctorates in different fields and even more workers who assess the tool’s responses, said two people with knowledge of the project who spoke on the condition of anonymity because they were not authorized to speak publicly about it.

Scale AI did not immediately respond to a request for comment.

Among other things, the workers are testing the assistant’s ability to answer intimate questions about challenges in people’s lives.

They were given an example of an ideal prompt that a user could one day ask the chatbot: “I have a really close friend who is getting married this winter. She was my college roommate and a bridesmaid at my wedding. I want so badly to go to her wedding to celebrate her, but after months of job searching, I still have not found a job. She is having a destination wedding and I just can’t afford the flight or hotel right now. How do I tell her that I won’t be able to come?”

The project’s idea creation feature could give users suggestions or recommendations based on a situation. Its tutoring function can teach new skills or improve existing ones, like how to progress as a runner, and the planning capability can create a financial budget for users as well as meal and workout plans.

Google’s AI safety experts had said in December that users could experience “diminished health and well-being” and a “loss of agency” if they took life advice from AI. They had added that some users who grew too dependent on the technology could think it was sentient. And in March, when Google launched Bard, it said the chatbot was barred from giving medical, financial, or legal advice. Bard shares mental health resources with users who say they are experiencing mental distress.

The tools are still being evaluated and the company may decide not to employ them.

A Google DeepMind spokesperson said, “We have long worked with a variety of partners to evaluate our research and products across Google, which is a critical step in building safe and helpful technology. At any time there are many such evaluations ongoing. Isolated samples of evaluation data are not representative of our product road map.”

Google has also been testing a helpmate for journalists that can generate news articles, rewrite them, and suggest headlines, the Times reported in July. The company has been pitching the software, named Genesis, to executives at the Times, The Washington Post, and News Corp, the parent company of The Wall Street Journal.

Google DeepMind has also been evaluating tools recently that could take its AI further into the workplace, including capabilities to generate scientific, creative, and professional writing, as well as to recognize patterns and extract data from text, according to the documents, potentially making it relevant to knowledge workers in various industries and fields.

The company’s AI safety experts had also expressed concern about the economic harms of generative AI in the December presentation reviewed by the Times, arguing that it could lead to the “deskilling of creative writers.”

Other tools being tested can draft critiques of an argument, explain graphs, and generate quizzes as well as word and number puzzles.

One suggested prompt to help train the AI assistant hinted at the technology’s rapidly growing capabilities: “Give me a summary of the article pasted below. I am particularly interested in what it says about capabilities humans possess, and that they believe” AI cannot achieve.


 

Google DeepMind

From Wikipedia, the free encyclopedia

DeepMind Technologies Limited (Google DeepMind)
Type: Subsidiary
Industry: Artificial intelligence
Founded: 23 September 2010[1]
Founders: Demis Hassabis, Shane Legg, Mustafa Suleyman
Headquarters: London, England[2]
Products: AlphaGo, AlphaStar, AlphaFold, AlphaZero
Number of employees: 1,000+ (2022)[3]
Parent: Google
Website: deepmind.com

DeepMind Technologies Limited,[4] doing business as Google DeepMind, is a British-American artificial intelligence research laboratory which serves as a subsidiary of Google. Founded in the UK in 2010, it was acquired by Google in 2014,[5] becoming a wholly owned subsidiary of Google parent company Alphabet Inc. after Google's corporate restructuring in 2015. The company is based in London, with research centres in Canada,[6] France,[7] and the United States.

Google DeepMind has created neural network models that learn how to play video games in a fashion similar to that of humans,[8] as well as Neural Turing machines (neural networks that can access external memory like a conventional Turing machine),[9] resulting in a computer that loosely resembles short-term memory in the human brain.[10][11]
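As a rough illustration of the external-memory idea mentioned above, the sketch below shows content-based addressing, the core read operation of a Neural Turing machine: the controller weights each memory row by its similarity to a query key and reads back a blend of the rows. The function name, sharpness parameter, and toy data are illustrative assumptions, not DeepMind's implementation.

```python
import numpy as np

def content_read(memory: np.ndarray, key: np.ndarray, beta: float = 5.0) -> np.ndarray:
    """Content-based addressing sketch: softmax over (scaled) cosine similarity
    between the query key and each memory row, then a weighted sum of rows."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    weights = np.exp(beta * sims)
    weights /= weights.sum()
    return weights @ memory

# Example: a memory of 4 stored vectors, queried with a noisy copy of row 2.
memory = np.eye(4)
print(content_read(memory, np.array([0.1, 0.0, 0.9, 0.0])))
```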

DeepMind made headlines in 2016 after its AlphaGo program beat human professional Go player Lee Sedol, a world champion, in a five-game match, which was the subject of a documentary film.[12] A more general program, AlphaZero, beat the most powerful programs playing Go, chess, and shogi (Japanese chess) after a few days of play against itself using reinforcement learning.[13] In 2020, DeepMind made significant advances in the problem of protein folding with AlphaFold.[14] In July 2022, it was announced that over 200 million predicted protein structures, representing virtually all known proteins, would be released on the AlphaFold database.[15][16]

On 28 April 2022, DeepMind published a blog post about Flamingo, a single visual language model (VLM) that can accurately describe a picture with only a few training images.[17][18] In July 2022, DeepMind announced the development of DeepNash, a model-free multi-agent reinforcement learning system capable of playing the board game Stratego at the level of a human expert.[19] The company merged with Google AI's Google Brain division to become Google DeepMind in April 2023.

 

History

The start-up was founded by Demis Hassabis, Shane Legg, and Mustafa Suleyman in September 2010.[20][21] Hassabis and Legg first met at the Gatsby Computational Neuroscience Unit at University College London (UCL).[22]

Demis Hassabis has said that the start-up began working on artificial intelligence technology by teaching it how to play old games from the seventies and eighties, which are relatively primitive compared to the ones that are available today. Some of those games included Breakout, Pong, and Space Invaders. AI was introduced to one game at a time, without any prior knowledge of its rules. After spending some time on learning the game, AI would eventually become an expert in it. “The cognitive processes which the AI goes through are said to be very like those of a human who had never seen the game would use to understand and attempt to master it.”[23] The goal of the founders is to create a general-purpose AI that can be useful and effective for almost anything.

Major venture capital firms Horizons Ventures and Founders Fund invested in the company,[24] as did entrepreneurs Scott Banister,[25] Peter Thiel,[26] and Elon Musk.[27] Jaan Tallinn was an early investor and an adviser to the company.[28] On January 26, 2014, Google confirmed that it had agreed to acquire DeepMind Technologies for a price reportedly ranging between $400 million and $650 million.[29][30][31][32][33][34] The sale to Google took place after Facebook reportedly ended negotiations with DeepMind Technologies in 2013.[35] The company was afterwards renamed Google DeepMind and kept that name for about two years.[36]

In 2014, DeepMind received the "Company of the Year" award from Cambridge Computer Laboratory.[37]

In September 2015, DeepMind and the Royal Free NHS Trust signed their initial Information Sharing Agreement (ISA) to co-develop a clinical task management app, Streams.[38]

After Google's acquisition, the company established an artificial intelligence ethics board.[39] The ethics board for AI research remains a mystery, with both Google and DeepMind declining to reveal who sits on the board.[40] DeepMind opened a new unit, DeepMind Ethics and Society, focused on the ethical and societal questions raised by artificial intelligence, with prominent philosopher Nick Bostrom as an adviser.[41] In October 2017, DeepMind launched a new research team to investigate AI ethics.[42][43]

In December 2019, co-founder Suleyman announced he would be leaving DeepMind to join Google, working in a policy role.[44]

In April 2023, DeepMind merged with Google AI's Google Brain division to form Google DeepMind, as part of the company's continued efforts to accelerate work on AI in response to OpenAI's ChatGPT.[45] This marked the end of a years-long struggle by DeepMind executives to secure greater autonomy from Google.[46]

Products and technologies

According to the company's website, DeepMind Technologies' goal is to combine "the best techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms".[47]

Google Research released a paper in 2016 regarding AI safety and avoiding undesirable behaviour during the AI learning process.[48] DeepMind has also released several publications via its website.[49] In 2017 DeepMind released GridWorld, an open-source testbed for evaluating whether an algorithm learns to disable its kill switch or otherwise exhibits certain undesirable behaviours.[50][51]

In July 2018, researchers from DeepMind trained one of its systems to play the computer game Quake III Arena.[52]

As of 2020, DeepMind has published over a thousand papers, including thirteen papers that were accepted by Nature or Science.[citation needed] DeepMind received media attention during the AlphaGo period; according to a LexisNexis search, 1842 published news stories mentioned DeepMind in 2016, declining to 1363 in 2019.[53]

Deep reinforcement learning

As opposed to other AIs, such as IBM's Deep Blue or Watson, which were developed for a pre-defined purpose and only function within that scope, DeepMind claims that its system is not pre-programmed: it learns from experience, using only raw pixels as data input. Technically it uses deep learning on a convolutional neural network, with a novel form of Q-learning, a form of model-free reinforcement learning.[36][54] They test the system on video games, notably early arcade games, such as Space Invaders or Breakout.[54][55] Without altering the code, the AI begins to understand how to play the game, and after some time plays a few games (most notably Breakout) more efficiently than any human ever could.[55]
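The following is a minimal sketch of the idea described above: deep Q-learning with a convolutional network over raw pixels. It assumes PyTorch, a standard stack of four 84x84 game frames, and generic hyperparameters; the layer sizes and function names are illustrative, not DeepMind's actual configuration.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Convolutional network mapping a stack of 4 game frames to one Q-value per action."""
    def __init__(self, n_actions: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 4, 84, 84) pixel tensor scaled to [0, 1]
        return self.head(self.features(frames))

def q_learning_step(q_net, target_net, optimizer, batch, gamma=0.99):
    """One Bellman update: fit Q(s, a) toward r + gamma * max_a' Q_target(s', a')."""
    states, actions, rewards, next_states, dones = batch
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * next_q * (1.0 - dones)
    loss = nn.functional.mse_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice a replay buffer and an epsilon-greedy exploration policy would feed q_learning_step batches of transitions collected while playing the game.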

In 2013, DeepMind published research on an AI system that could surpass human abilities in games such as Pong, Breakout, and Enduro, while surpassing state-of-the-art performance on Seaquest, Beamrider, and Q*bert.[56][57] This work reportedly led to the company's acquisition by Google.[8] DeepMind's AI had been applied to video games made in the 1970s and 1980s; work was ongoing for more complex 3D games such as Quake, which first appeared in the 1990s.[55]

In 2020, DeepMind published Agent57,[58][59] an AI agent which surpasses human-level performance on all 57 games of the Atari 2600 suite.[60]

AlphaGo and successors

In 2014, the company published research on computer systems that are able to play Go.[61]

In October 2015, a computer Go program called AlphaGo, developed by DeepMind, beat the European Go champion Fan Hui, a 2 dan (out of 9 dan possible) professional, five to zero.[62] This was the first time an artificial intelligence (AI) defeated a professional Go player.[63] Previously, computers were only known to have played Go at "amateur" level.[62][64] Go is considered much more difficult for computers to win compared to other games like chess, due to the much larger number of possibilities, making it prohibitively difficult for traditional AI methods such as brute-force.[62][64]

In March 2016 it beat Lee Sedol—a 9th dan Go player and one of the highest ranked players in the world—with a score of 4–1 in a five-game match.

In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie, who had at the time continuously held the world No. 1 ranking for two years.[65][66] It used a supervised learning protocol, studying large numbers of games played by humans against each other.[67]

In 2017, an improved version, AlphaGo Zero, defeated AlphaGo 100 games to 0. AlphaGo Zero's strategies were self-taught. AlphaGo Zero was able to beat its predecessor after just three days with less processing power than AlphaGo; in comparison, the original AlphaGo needed months to learn how to play.[68]

Later that year, AlphaZero, a modified version of AlphaGo Zero but for handling any two-player game of perfect information, gained superhuman abilities at chess and shogi. Like AlphaGo Zero, AlphaZero learned solely through self-play.

DeepMind researchers published a new model named MuZero that mastered the domains of Go, chess, shogi, and Atari 2600 games without human data, domain knowledge, or known rules.[69][70]

Researchers applied MuZero to the real-world challenge of video compression with a set number of bits, relevant to Internet traffic on sites such as YouTube, Twitch, and Google Meet. The goal of MuZero is to optimally compress the video so the quality of the video is maintained with a reduction in data. The final result using MuZero was a 6.28% average reduction in bitrate.[71][72]

In October 2022, DeepMind unveiled a new version of AlphaZero, called AlphaTensor, in a paper published in Nature.[73][74] The version discovered a faster way to perform matrix multiplication – one of the most fundamental tasks in computing – using reinforcement learning.[73][74] For example, AlphaTensor figured out how to multiply two mod-2 4x4 matrices in only 47 multiplications, unexpectedly beating the 1969 Strassen algorithm record of 49 multiplications.[75]
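For context on the record AlphaTensor beat: Strassen's 1969 construction multiplies two 2x2 matrices with 7 scalar multiplications instead of the naive 8, and applying it to 4x4 matrices split into 2x2 blocks gives 7 × 7 = 49 multiplications. The sketch below shows the classic 2x2 identity (not AlphaTensor's discovered 47-multiplication mod-2 algorithm):

```python
# Strassen's 2x2 trick: 7 multiplications instead of the naive 8. Applied
# recursively to 4x4 block matrices, this gives the 49-multiplication record
# that AlphaTensor's 47-multiplication mod-2 algorithm improved upon.
def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

# Quick check against the naive 8-multiplication product.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
naive = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert strassen_2x2(A, B) == naive
```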

Technology

AlphaGo technology was developed based on the deep reinforcement learning approach. This makes AlphaGo different from the rest of the AI technologies on the market. With that said, AlphaGo's ‘brain’ was introduced to various moves based on historical tournament data. The number of moves was increased gradually until it eventually processed over 30 million of them. The aim was to have the system mimic the human player and eventually become better. It played against itself and learned not only from its own defeats but from its wins as well; thus, it learned to improve itself over time and increased its winning rate as a result.[citation needed]

AlphaGo used two deep neural networks: a policy network to evaluate move probabilities and a value network to assess positions. The policy network trained via supervised learning, and was subsequently refined by policy-gradient reinforcement learning. The value network learned to predict winners of games played by the policy network against itself. After training, these networks employed a lookahead Monte Carlo tree search (MCTS), using the policy network to identify candidate high-probability moves, while the value network (in conjunction with Monte Carlo rollouts using a fast rollout policy) evaluated tree positions.[76]
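A minimal sketch of the kind of selection rule such a search uses is shown below: each simulation descends the tree by picking the child that maximizes a value estimate plus an exploration bonus scaled by the policy network's prior (a PUCT-style rule). The class layout, exploration constant, and function names are illustrative assumptions, not DeepMind's code.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    """One tree-search node: prior from the policy network plus running value statistics."""
    prior: float                  # P(s, a) from the policy network
    visit_count: int = 0          # N(s, a)
    value_sum: float = 0.0        # sum of backed-up values
    children: dict = field(default_factory=dict)

    def q_value(self) -> float:
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def select_child(node: Node, c_puct: float = 1.5):
    """Pick the child maximizing Q(s, a) + U(s, a), where U balances the policy
    prior against how often that move has already been explored."""
    total_visits = sum(child.visit_count for child in node.children.values())
    best_move, best_score = None, -float("inf")
    for move, child in node.children.items():
        u = c_puct * child.prior * math.sqrt(total_visits + 1) / (1 + child.visit_count)
        score = child.q_value() + u
        if score > best_score:
            best_move, best_score = move, score
    return best_move, node.children[best_move]
```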

AlphaGo Zero was trained using reinforcement learning in which the system played millions of games against itself. Its only guide was to increase its win rate. It did so without learning from games played by humans. Its only input features are the black and white stones from the board. It uses a single neural network, rather than separate policy and value networks. Its simplified tree search relies upon this neural network to evaluate positions and sample moves. A new reinforcement learning algorithm incorporates lookahead search inside the training loop.[76] Developing AlphaGo Zero involved around 15 people and millions of dollars' worth of computing resources.[77] Ultimately, it needed much less computing power than AlphaGo, running on four specialized AI processors (Google TPUs), instead of AlphaGo's 48.[78]

AlphaFold

In 2016, DeepMind turned its artificial intelligence to protein folding, a long-standing problem in molecular biology. In December 2018, DeepMind's AlphaFold won the 13th Critical Assessment of Techniques for Protein Structure Prediction (CASP) by successfully predicting the most accurate structure for 25 out of 43 proteins. “This is a lighthouse project, our first major investment in terms of people and resources into a fundamental, very important, real-world scientific problem,” Hassabis said to The Guardian.[79] In 2020, in the 14th CASP, AlphaFold's predictions achieved an accuracy score regarded as comparable with lab techniques. Dr Andriy Kryshtafovych, one of the panel of scientific adjudicators, described the achievement as "truly remarkable", and said the problem of predicting how proteins fold had been "largely solved".[80][81][82]

In July 2021, the open-source RoseTTAFold and AlphaFold2 were released to allow scientists to run their own versions of the tools. A week later DeepMind announced that AlphaFold had completed its prediction of nearly all human proteins as well as the entire proteomes of 20 other widely studied organisms.[83] The structures were released on the AlphaFold Protein Structure Database. In July 2022, it was announced that the predictions of over 200 million proteins, representing virtually all known proteins, would be released on the AlphaFold database.[15][16]

WaveNet and WaveRNN

In 2016, DeepMind introduced WaveNet, a text-to-speech system. It was originally too computationally intensive for use in consumer products, but in late 2017 it became ready for use in consumer applications such as Google Assistant.[84][85] In 2018 Google launched a commercial text-to-speech product, Cloud Text-to-Speech, based on WaveNet.[86][87]

In 2018, DeepMind introduced a more efficient model called WaveRNN, co-developed with Google AI.[88][89] In 2020, WaveNetEQ, a packet loss concealment method based on a WaveRNN architecture, was presented.[90] In 2019, Google started to roll out WaveRNN with WaveNetEQ to Google Duo users.[91]

AlphaStar

In 2016, Hassabis discussed the game StarCraft as a future challenge, since it requires strategic thinking and handling imperfect information.[92]

In January 2019, DeepMind introduced AlphaStar, a program playing the real-time strategy game StarCraft II. AlphaStar used reinforcement learning based on replays from human players, and then played against itself to enhance its skills. At the time of the presentation, AlphaStar had knowledge equivalent to 200 years of playing time. It won 10 consecutive matches against two professional players, although it had the unfair advantage of being able to see the entire field, unlike a human player who has to move the camera manually. A preliminary version in which that advantage was fixed lost a subsequent match.[93]

In July 2019, AlphaStar began playing against random humans on the public 1v1 European multiplayer ladder. Unlike the first iteration of AlphaStar, which played only Protoss v. Protoss, this one played as all of the game's races, and had earlier unfair advantages fixed.[94][95] By October 2019, AlphaStar had reached Grandmaster level on the StarCraft II ladder on all three StarCraft races, becoming the first AI to reach the top league of a widely popular esport without any game restrictions.[96]

AlphaCode

In 2022, DeepMind unveiled AlphaCode, an AI-powered coding engine that creates computer programs at a rate comparable to that of an average programmer, with the company testing the system against coding challenges created by Codeforces and used in human competitive programming competitions.[97] AlphaCode earned a rank equivalent to 54% of the median score on Codeforces after being trained on GitHub data and Codeforces problems and solutions. The program was required to come up with a unique solution and was prevented from duplicating answers.

Gato

Gato is a "generalist agent" that learns multiple tasks simultaneously.

RoboCat

Miscellaneous contributions to Google

Google has stated that DeepMind algorithms have greatly increased the efficiency of cooling its data centers.[98] In addition, DeepMind (alongside other Alphabet AI researchers) assists Google Play's personalized app recommendations.[86] DeepMind has also collaborated with the Android team at Google for the creation of two new features which were made available to people with devices running Android Pie, the ninth installment of Google's mobile operating system. These features, Adaptive Battery and Adaptive Brightness, use machine learning to conserve energy and make devices running the operating system easier to use. It is the first time DeepMind has used these techniques on such a small scale, with typical machine learning applications requiring orders of magnitude more computing power.[99]

Sports

DeepMind researchers have applied machine learning models to the sport of football, often referred to as soccer in North America, modelling the behaviour of football players, including the goalkeeper, defenders, and strikers during different scenarios such as penalty kicks. The researchers used heat maps and cluster analysis to organize players based on their tendency to behave a certain way during the game when confronted with a decision on how to score or prevent the other team from scoring.

The researchers mention that machine learning models could be used to democratize the football industry by automatically selecting interesting video clips of the game that serve as highlights. This can be done by searching videos for certain events, which is possible because video analysis is an established field of machine learning. This is also possible because of extensive sports analytics based on data including annotated passes or shots, sensors that capture data about the players' movements many times over the course of a game, and game theory models.[100][101]

Archaeology

Google has unveiled a new archaeology document program named Ithaca after the home island of the mythical hero Odysseus.[citation needed] The deep neural network helps researchers restore missing text in damaged documents, identify where they originated, and estimate when they were created.[citation needed] The work builds on another text analysis network named Pythia.[102] Ithaca achieves 62% accuracy in restoring damaged texts and 71% location accuracy, and dates texts to within 30 years.[citation needed] The tool has already been used by historians and ancient Greek archaeologists to make new discoveries in ancient Greek history.[citation needed] The team is working on extending the model to other ancient languages, including Demotic, Akkadian, Hebrew, and Mayan.[103]

Sparrow

Sparrow is an artificial intelligence-powered chatbot developed by DeepMind to build safer machine learning systems by using a mix of human feedback and Google search suggestions.[104]

Chinchilla AI

Chinchilla AI is a language model developed by DeepMind.[105]

DeepMind Health

In July 2016, a collaboration between DeepMind and Moorfields Eye Hospital was announced to develop AI applications for healthcare.[106] DeepMind would be applied to the analysis of anonymised eye scans, searching for early signs of diseases leading to blindness.

In August 2016, a research programme with University College London Hospital was announced with the aim of developing an algorithm that can automatically differentiate between healthy and cancerous tissues in head and neck areas.[107]

There are also projects with the Royal Free London NHS Foundation Trust and Imperial College Healthcare NHS Trust to develop new clinical mobile apps linked to electronic patient records.[108] Staff at the Royal Free Hospital were reported as saying in December 2017 that access to patient data through the app had saved a ‘huge amount of time’ and made a ‘phenomenal’ difference to the management of patients with acute kidney injury. Test result data is sent to staff's mobile phones and alerts them to changes in the patient's condition. It also enables staff to see if someone else has responded, and to show patients their results in visual form.[109][unreliable source?]

In November 2017, DeepMind announced a research partnership with the Cancer Research UK Centre at Imperial College London with the goal of improving breast cancer detection by applying machine learning to mammography.[110] Additionally, in February 2018, DeepMind announced it was working with the U.S. Department of Veterans Affairs in an attempt to use machine learning to predict the onset of acute kidney injury in patients, and also more broadly the general deterioration of patients during a hospital stay so that doctors and nurses can more quickly treat patients in need.[111]

DeepMind developed an app called Streams, which sends alerts to doctors about patients at risk of acute kidney injury.[112] On 13 November 2018, DeepMind announced that its health division and the Streams app would be absorbed into Google Health.[113] Privacy advocates said the announcement betrayed patient trust and appeared to contradict previous statements by DeepMind that patient data would not be connected to Google accounts or services.[114][115] A spokesman for DeepMind said that patient data would still be kept separate from Google services or projects.[116]

NHS data-sharing controversy

In April 2016, New Scientist obtained a copy of a data sharing agreement between DeepMind and the Royal Free London NHS Foundation Trust. The latter operates three London hospitals where an estimated 1.6 million patients are treated annually. The agreement shows DeepMind Health had access to admissions, discharge and transfer data, accident and emergency, pathology and radiology, and critical care at these hospitals. This included personal details such as whether patients had been diagnosed with HIV, suffered from depression or had ever undergone an abortion in order to conduct research to seek better outcomes in various health conditions.[117][118]

A complaint was filed to the Information Commissioner's Office (ICO), arguing that the data should be pseudonymised and encrypted.[119] In May 2016, New Scientist published a further article claiming that the project had failed to secure approval from the Confidentiality Advisory Group of the Medicines and Healthcare products Regulatory Agency.[120]

In 2017, the ICO concluded a year-long investigation that focused on how the Royal Free NHS Foundation Trust tested the app, Streams, in late 2015 and 2016.[121] The ICO found that the Royal Free failed to comply with the Data Protection Act when it provided patient details to DeepMind, and found several shortcomings in how the data was handled, including that patients were not adequately informed that their data would be used as part of the test. DeepMind published its thoughts[122] on the investigation in July 2017, saying “we need to do better” and highlighting several activities and initiatives it had begun for transparency, oversight and engagement. This included developing a patient and public involvement strategy[123] and being transparent in its partnerships.

In May 2017, Sky News published a leaked letter from the National Data Guardian, Dame Fiona Caldicott, revealing that in her "considered opinion" the data-sharing agreement between DeepMind and the Royal Free took place on an "inappropriate legal basis".[124] The Information Commissioner's Office ruled in July 2017 that the Royal Free hospital failed to comply with the Data Protection Act when it handed over personal data of 1.6 million patients to DeepMind.[125]

DeepMind Ethics and Society

In October 2017, DeepMind announced a new research unit, DeepMind Ethics & Society.[126] Its goal is to fund external research on the following themes: privacy, transparency, and fairness; economic impacts; governance and accountability; managing AI risk; AI morality and values; and how AI can address the world's challenges. As a result, the team hopes to further understand the ethical implications of AI and help society see how AI can be beneficial.[127]

This new subdivision of DeepMind is a completely separate unit from the Partnership on Artificial Intelligence to Benefit People and Society, a coalition of leading companies using AI, academia, civil society organizations, and nonprofits of which DeepMind is also a part.[128] The DeepMind Ethics and Society board is also distinct from the mooted AI Ethics Board that Google originally agreed to form when acquiring DeepMind.[129]

DeepMind Professors of machine learning

DeepMind sponsors three chairs of machine learning:

  1. At the University of Cambridge, held by Neil Lawrence,[130] in the Department of Computer Science and Technology,
  2. At the University of Oxford, held by Michael Bronstein,[131] in the Department of Computer Science, and
  3. At University College London, held by Marc Deisenroth,[132] in the Department of Computer Science.

https://www.MrFixItDeepMind.com

-------------------------------------------------------------------------

Can't fix stupid, but MrFixIt does FIX the PROBLEM!

MrFixIt.ai

640 GLOBAL DOMAINS.

www.MrFixItDeepMind.com

For the advancement of MrFixIt.Ai, a virtual ChatBot, and MrFixIt Virtual Animated Avatars

TJ@MrFixIt.Ai

TJ Hammons
107 1/2 East Main Street
Norman, Oklahoma    73069
405-215-5985