260SmithWatt 70Neumann 50F.Abed , AI20s.com Fei-Fei Li, Zbee

HumansAI.com NormanMacrae.net AIGames.solar EconomistDiary.com Abedmooc.com

ai celebrating greatest (ie most good for 8 bn beings) human intelligences (1950-2030)

Happy 2024 from AIGames, year 1 of intelligence as everyone's game - as yet we recommend attending to next brain-leaps by Li, Lecun and Hassabis first (as well as sector checking); we hope by xmas 2025 to offer a 361 mosaic of brains to celebrate personal education agency with

11/21 breaking - clara shih ceo salesforce ai interview of Fei-Fei Li

breaking nov- uk producer of bletchley ai summit mp donelan- week in dc l ... start of ai world series  -also AI-secret-life-Rowan-Atkinson.docx & other UK AI Greats

- some big turning points in fei-fei li world of intelligence clarified nov 2023 book worlds i see p100

also of huge interest, nov 2023: bill gates has come out and says he now sees ai will offer personalised learning agency to every 21st c 2020s+ student - there are about 20 (societal not tech) blocks to this happening that the american education system must be transparent about (eg many of these system failings go back decades), having distracted from them since dad's and my book 2025 report (written 1983, the 33rd year of The Economist as student of von neumann) made the case that transformation of education would determine whether 8 billion humans unite beyond extinction - chris.macrae@yahoo.co.uk - more at our series of linkedin articles ed3envoyun eg 1

 AIgamesbookpages1-2.docx  Notation - years like 1984 (jobs, gates, imagining leaps web 1-2-3) denote an intelligence crossroads that needs worldwide recursive attention, the opposite of assuming that the way ahead raced into was good enough for societal democracy let alone legislative good; indeed, as early as 1976 The Economist's survey Entrepreneurial Revolution concluded from 1951-1975 (the first quarter century of post-industrial revolution) that political parties' segmentation of left (labor) versus right was no longer fit for smart democracy system design

sometimes many years of work/learning had preceded this; we'd like to hear from you particularly if you think an intelligence date-stamped eg jobs84, gates84 should be denoted by an earlier year's breakthrough; one exception: in the case of neumann, einstein and turing we start at '51 and their co-creation of brainworkers' engine type 6 (we understand they had all made heroic innovations for the allies to win the war, and in einstein's case energy and other engineering/science breakthroughs from his early 20s; but it was 1951 that neumann widely briefed media, including dad, The Economist's Norman Macrae, on brainworking engines); for our purposes the un, through its sister ITU agency of all telecoms companies, launched 3g in 2001 and 4g in 2009 (both multiplied data transmission by at least 100 fold, but 4g also wifi'd data up and down from every gps in satellite range); seeing 5g futures would risk locally deep controversies such as whether climate brainworking cooperation was actionably intended by the 193 nations' declaration 2015 of 17 sdgs -

The Net (Neumann-Einstein-Turing) had started machines for brainworkers by 1951 - as well as hardware, their fav language model was Neural Networks (see neumann's last notes, Computer & the Brain) - BARD SAYS

The concept of neural networks was first introduced in the 1940s by Warren McCulloch and Walter Pitts.

McCulloch was a neurophysiologist and Pitts was a mathematician. They were interested in developing a mathematical model of the brain that could be used to simulate human intelligence.

McCulloch and Pitts developed a simple model of a neuron that consisted of a weighted sum of inputs and a threshold function. The neuron would fire if the weighted sum of inputs was greater than the threshold function.
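The McCulloch-Pitts unit described above can be sketched in a few lines. This is a minimal illustration, not code from any of the people catalogued here; the weights and thresholds chosen are illustrative assumptions showing how such a unit can act as a logic gate.

```python
# Minimal sketch of a McCulloch-Pitts neuron: the unit fires (outputs 1)
# when the weighted sum of its inputs reaches the threshold.

def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs meets the threshold, else 0."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# With suitable (illustrative) weights and thresholds the unit behaves
# like simple logic gates:
and_gate = lambda a, b: mcculloch_pitts_neuron([a, b], [1, 1], 2)
or_gate  = lambda a, b: mcculloch_pitts_neuron([a, b], [1, 1], 1)

print(and_gate(1, 1), and_gate(1, 0))  # 1 0
print(or_gate(0, 1), or_gate(0, 0))    # 1 0
```

a single such unit cannot compute everything (famously not XOR), which is part of why neural networks needed later layering breakthroughs.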

oddly, any true sense of man-made intelligence revolutions as being by and for the peoples did not massively value NN again until 2009, in spite of Yann Lecun's 1980s contributions

in 2009 two new phds on neuroscience foresaw the likelihood that computer visioning brilliance would depend on huge dataset assembly and training - fei-fei li dared the academic world to build the gigantic imagenet and make it pivotal to a decade of annual progress competitions; hassabis and fei-fei li celebrated each other's persistence - today leading science ai connects around hassabis (Alphafold2 open sources 20000 proteins); the first proof of deep learning object recognition came with the 2012 entry to imagenet from canadian students of Hinton

hence we recommend you know contexts shaped by

Li Fei Fei 1

Hassabis Demis  -co-founder Shane Legg

Hinton Geoffrey  or Bengio Yoshua 1

1 Lecun Yann 1  Convolution Net - arguably the senior ny academic to see - currently at nyu with facebook, while the courant institute supports deep mind google and womens engineers supported by ms t2 brooklyn

Ng Andrew- see deep learning ai short courses   

& Koller Daphne; things broadened a lot with nlp/chat models but eg start with Aidan Gomez

p175 of fei-fei li's book confirms the first deep code stanford people she liked: ng, koller daphne, and thrun sebastian - then the 3 became 4 with Dally Bill, chair of computer science, asking do you want to bring your lab to stanford

(interesting corporate note - google trio pichai, brin, page all stanford alumni, ceos of nvidia and AI2 (Allen institute 2 is on ai, alongside the first institute on bio-health) - wolfram

msT1 brooklyn - first sponsor of neuroscience schools: 1 stanford, yale, elsewhere; owner ny liberty brooklyn basketball (2nd in 2023 wnba to aja wilson's la aces); governor barclays center; 50 million dollar philanthropy of colored social networks; with husband supports many ny arts nets; on stanford trustee board; ny asia society laureates - it's vital the un has a ny suburb demonstrating sdgs ai to east coast usa; ...

Personally I don't see humans being sustainable by 2030 or ever without clearly seeing what system maps Neumann-Einstein-Turing connected for humanity. Most of the mathematical lives' time of the NET gravitated around the experience of being 2 immigrants to princeton from Europe's epicentre of war; they won the science races integral to supporting the allies in winning what had been generated by the exponentially terrible mess of how inequitably the first 5 engine types had been applied by a few dominant empires who carved up nature's resources and classes of people (eg ranging from free to slaves to genocided against).

From the end of world war 2 neumann led the agenda that the net needed to give the world machine type 6 - intelligence machines for brainworking (see his final lectures: computer & the brain). Tragically none of the net had more than 12 years of life left to plant this post-industrial revolution.

The worlds they foresaw were about far more than the hardware of computers or binary coding. How did the human brain sense the world - with sight occupying the most central capacity of the senses? How did languages code the whole deep diversity of growing up with natures in different hemispheres, let alone the emergence of data beamed up and down from GPS? As a transparency note, I may be biased: as with The Economist (and the scots system philosophy of moral sentiments economics starting with smith 1758 - his 2 main books being the last description of man-made markets and natural forces before engines, and a 16 year review of how place strategies changed as engineers spread innovation) my family became the biographer of neumann's innovation agendas for humanity.

These notes from bard on the linguistic and social differences of The Net also provide insights on how their work's intersections helped to bust almost every academic silo of so-called scientific man. Note in particular chaos theory, which gauges the risks of science itself being a false approximation vis a vis the interconnectedness of systems on nature's overall platforms, earth and space.

Albert Einstein and John von Neumann were both multilingual. Einstein spoke German, French, English, and Italian. Von Neumann spoke Hungarian, German, English, French, and Greek.

Einstein was born into a German-speaking Jewish family in Ulm, Germany. He learned to speak German as his first language. He also learned to speak English and French as a child. He was fluent in all three languages by the time he reached adulthood.

Von Neumann was born into a Hungarian-Jewish family in Budapest, Hungary. He learned to speak Hungarian and German as a child. He also learned to speak English, French, and Greek as a teenager. He was fluent in all five languages by the time he reached adulthood.

It is not clear whether Einstein and von Neumann learned their second languages at home or at school. However, it is clear that they were both exposed to multiple languages from a young age. This exposure likely helped them to develop a strong ability for language learning.

Multilingualism is a valuable skill. It can help people to communicate with others from different cultures, to access information and resources in different languages, and to learn about different cultures. Einstein and von Neumann were both brilliant scientists, and their multilingualism likely played a role in their success. Turing only spoke English fluently. There is no record of him speaking any other language fluently, although he did learn some French and German during his school years.

Turing was a brilliant mathematician and computer scientist, and his work laid the foundation for the modern field of artificial intelligence. He was also a very private person, and he did not socialize much. This may have limited his opportunities to practice speaking other languages

if you want to help the un "AI the sdgs" (now that's possible with LLMediation) then our tic tac toe game board blends womens AI: Abed Fazle, Jobs Steve (from 2001 inspiring the valley to rethink whether it had human dev purpose), Gates Melinda, Chan Priscilla - the best intelligence books for girls and everyone through school year 23-24

youth ai adds Stanfords Ng Andrew, Yang Jerry, and Brooklyn's Ms T&T

health foundation model & gov2.0 intels add KimJY & Gates Bill together with Guterres Antonio (video 1 2) & Sheika Moza (please note we are taking a primarily western view in this thread; we'd start with eg jack ma if he were free to be a worldwide youth mentor, not the west's political pawn..)

the ceos the world needs to demand best not shortest AI gains from appear to be Nvidia, Ai2, Google (Pichai Brin Page)

THEY KNEW https://www.c-span.org/video/?447599-1/artificial-intelligence - brockman greg, co-founder open ai, testified with fei-fei li to congress june 2018 that a trillion times more tech compute would be put into chats etc by 2028, changing education and everything imaginable - compare this with the neumann/economist survey The Economist 1951-1984, which hypothesised a trillion times more would take 60 years; we updated this from 1984 in the 2025 report genre, but a trillion times? no wonder we are facing what hinton/li call 2 buckets - catastrophes (eg climate, if it went wrong, might wipe out a billion with each meta-disaster) and extinction - the us political scene has one engineering doctorate among its top 500 people - they probably cannot see any deep human future unless movements demand they put some magic spex on
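The trillion-times figure above is consistent with steady compound growth; a back-of-envelope check (assuming, as the survey did, roughly 100x more compute per decade) shows 60 years of that pace is exactly a trillion-fold:

```python
# Back-of-envelope check of the "trillion times in 60 years" hypothesis,
# assuming a steady 100x multiplication of compute per decade.
per_decade = 100
decades = 6            # 60 years
growth = per_decade ** decades

print(growth)              # 1000000000000
print(growth == 10 ** 12)  # True - one trillion
```

the 2018 testimony's point was that the same trillion-fold leap might arrive in a decade rather than six.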

Jacobs is a canadian whose radical ventures fund is filling space now that hinton has retired, with a canadian venture park alongside U of Toronto - see his hosting of the key review of how hinton and li see the build up to today's unprecedented year of AI attention

marcus gary helped fei-fei li publish the first booklet demonstrating what national ai advisory bodies to gov could look like

below please find our alphabetic catalogue

Abed - epicentre of women empowerment intel over 50 years, linking up to a billion of the poorest asian mothers' networks - for a quarter century without electricity; Steve Jobs hosted silicon valley's 65th birthday wish party for abed 2001; neither silicon valley nor women empowerment partnerships have been the same since - abedmooc.com

ackoff - personally i find ackoff's simple/behavioral checklist of broken system exponential risks essential to factor into civil debates from the 1984 get-go of personal intel networking rising exponentially - eg he defines a broken system as one where the harder the system's historic professionals/experts try, the more chaos/conflict they cause; when you consider the future history legacy of the 1950s NET (Neumann-Einstein-Turing) it was possible to imagineer a stage, perhaps 4g telecoms and brainworking engines, where designing 193 nations' unity was sufficient to unite 8 billion brainworkers; in a sense those behaving as historically certified public servants within a nation were likely to multiply conflicts; the economist foresaw the need to proactively debate this in its 25th year mediating the net's legacy, where the survey EntrepreneurialRevolution.city introduced 2 core ideas - historic politicking between left (labor) and right will become meaningless in terms of renewing generations and the next extinction-risking problem; next capitalism (including efinance, egov, e-health - any deep society space and data mapping) needs to empower local community resiliency at every family-building gps, not just global or 193 national policy makers' big media budgets; architecturally, digital mediation and intergenerational investment will need changes such as green-scale-actions, not just greenwashing

Altman Russ - hai video 31 oct - Professor of Bioengineering, of Genetics, of Medicine (General Medical Discipline), of Biomedical Data Science, and, by courtesy, of Computer Science, Stanford University; Associate Director, Stanford Institute for Human-Centered Artificial Intelligence

Anderson Ray. arguably Ray Anderson benchmarked more than any turn of millennium fortune 5000 industrial sector ceo to show how a sector can profitably lead going green- but it took a decade long model and redesigned value chain architecture of suppliers towards circularity- Ray did this for the sector of carpet tiles gravitated out of his home state of Georgia- click to linkin to practising Ray's Intelligence

Andreessen serial goodtech entrepreneur/venturer in Valley since 1993 -fascinating 2023 dialogue with Condoleezza Rice

Attenborough David - arguably attenborough did more than any media influencer to argue nature's case as the primary evolutionary force on earth; part of foreseeing the consequences of the NET's gift has been imagineering how transforming to a global village world means that humans are for the first time playing real-time on nature's scale; if we irreversibly damage her system designs all civilisation will collapse simultaneously instead of history's separated collapses. our 1984 book 2025 report argued the bbc and all public broadcasters would have a unique role in transitioning millennials' lives - sadly, in spite of Attenborough's efforts, the totality of the bbc's purpose never valued this unique opportunity to celebrate queens english intelligence mapmakers. Attenborough's brother Richard directed the film Gandhi and supported UK Gandhian intelligence associations in line with Einstein's advocacy of Gandhi as the benchmark of sustainability generation leaders. In playing AIGames, we ask that your first move lists whose human intelligences you see as most advancing humanity since 1950. Of course a supplementary game would extend to intelligence legacies co-created before 1950, with Gandhi a prime source. Within this catalogue educators activating Gandhian logics include Montessori & Freire - and overall Einstein's intents for future society.

Benioff 1 corporate leadership ai salesforce 

berners lee 89 - father of the www, but why did he move from switzerland, where G standards were set, to Boston and not west coast valley; extraordinarily brave work keeping the web open wherever big telecoms tried to close it, but ultimately the learning web got 99% taken over by the commerce web until human ai leaders reintegrated neural networks as society's comeback in system design - see also negroponte, mit media lab, who seems to have had dreams beyond the scale that took over (of course some can argue that commercial monopolisation of digital would have been worse without this couple)

borlaug - an 80 year cooperation action learning curve still the centre of gravity of sdg2 intelligence: billions of people were likely saved from starvation by borlaug's transformation of local agricultural productivity - The Economist once celebrated "the happiest graph in the world" as illustrating how Japan (first to update agri systems with borlaug and industrial systems with deming) was prepared to share how its local farmers produced 15 times more rice than eg cambodia, all across the far east tropics, the region with the lowest life expectancy due to nutritional and dehydration crises causing up to one third of all infants to die; seed science tech offered microfranchising solutions - eg local rice production efficiency being the largest franchise replication across Asia's two thirds of human beings; given this momentum took off before the NET's gift of brainworking engines, seeing how life-critical knowhow links beyond nations' boundaries makes borlaug (eg global food prize) alumni an essential case catalogue for all intelligence designers

Brilliant 1 - so many passion stories why epidemiologists need a fan club even more than football (re your fav global entertainment); look at the 3 worlds brilliant's life experience helps us see: how did brilliant end smallpox across asia before the days of computer mapping - ie what manual network blueprint beat this infectious disease; brilliant's deepest passion was ending unnecessary blindness - see his videos on india's aravind modeling its local solutions franchise on the consistency of mcdonalds; brilliant had started semi-retirement on the west coast after a working life in asia when his ted talk demanded virus chasing be a priority app of artificial intel algorithm experts; he was appointed as first ceo of google.org; the stories of how the valley's consciousness changed after steve jobs hosted fazle abed's 65th birthday wish party 2001 included why epidemiologists were called on to train vice chancellors of the new global university shared alumni of sdg graduates; fortunately there had also been one us university benefiting from design by an epidemiologist - ie swarthmore; more generally, fans of last mile health intelligence estimate the world is currently short of training 100 million last mile health servants - see also glasgow adam smith scholars' attempts to design virtually free nursing colleges, or the tuition-free med school at nyu; when larry was a 20 something he walked into a detroit training hospital with some dehydration and was asked if he'd also like to train to be a doctor with zero risk of student debt; in that regard us higher ed was far smarter at sustaining the next generation in the 1960s than it is today; one more story: brilliant's first job was medical adviser to the band wavy gravy; after a hard year's tour circa 1968 the band went to meditate with their fav guru in the afghan hills; in those days brilliant recalls the grapevine even in the world's most remote places was full of positive chat: if americans can land on the moon, soon no humanitarian mission down on earth will be impossible

Brynjolfsson Erik - western macroeconomics (as keynes feared - see the warning in the last chapter of the general theory) has failed 21st c humans on all exponential sustainability challenges; erik is one of the few who may be able to translate how ai decisions now dwarf anything economis... erik, reich robert and Amy Zegart, who works with Rice Condoleezza (Hoover Stanford)

chang morris - one of the 4 western-acknowledged greats in chip design, making taiwan the largest chip manufacturer until nvidia's recent challenge - of course intel started chips as a 100 times multiplier per decade of brain machines; arm in uk may sneak in 4th;

chen jerry 1, greylock

csikszentmihalyi https://www.google.com/search?q=csikszentmihalyi+ai - Prof csik's research at claremont on genius showed they maximise time spent at the experiential edge of their unique knowhow; 70 years ago einstein argued that transforming personalised education with AI would determine good or bad outcomes for our species. One of the biggest risks to our species appears to be educators unable to change their system, and all the blocks to accepting partners in transformation. The luminaries approach of hong kong's yidan prize is one experiment in changing this, but open ai exchanges have run into political barriers https://www.universitas.ai/global

Dean Jeff - founded google brain (various generative ai tools from this group include the transformer breakthrough); brain and hassabis' deep mind became one overall unit 2023; bard says dean is the google contact person - eg for start artists wanting to connect food and good ai

Deming e

doudna - genes database crispr berkeley, connector of fei-fei since 2013 - a lead reviewer of ffl's worlds i see (// koller daphne) - 4th= womens AI 2023 (1 fei-fei li 2 priscilla chan 3 melinda gates)

East: Hauser & Sophie Wilson put UK on the world tech map primarily with ARM chips, which they converged to make the cambridge ecosystem bigger than eg oxford - see bard notes. Also arm's current ownership, somewhere between japan's softbank and nvidia, is in flux, but nvidia has said it will maintain arm in cambridge. Notably elon musk at the sunak summit called arm chips good; today hassabis' deep mind may be the uk ecosystem superstar, but brits can be proud of the cambridge ecosystem - particularly as things looked quite bleak when eg ICL went nowhere from the 1980s

Eshoo Anna 1 - represents the Stanford region in DC, leads the create AI Act

esteva andre - around 2015 helped fei-fei li monitor when computer vision of objects emulated human vision error rates - now in medical deep learning ai

estonia has arguably made better investment in government supporting human intelligence than anywhere with relatively limited resources - this also reminds us that human-made intelligence is not purposefully valued as an individual dynamic, even if our brains are right to want to know which human is most influencing any time we spend either learning or teaching - tallinn jaan is one of estonia's intel superstars

Etchemendy 1 - role as co-founder of stanford HAI appears to be making sure every discipline and discipline sponsor joins in human ai; also neighbor to FFL

Freire - culturally credited eg in Bangladesh as one of transformative education's sources for the human development miracle co-worked by up to a billion asian village women poverty alleviators; in the 1960s latin americans inspired by the us mission impossible moon race debated: if we're entering a mission impossible age, what root culture unites us; franciscans led the overall consensus; freire soon became guide of radical education matching franciscan servant leadership, the ecological role of franciscan males and the maternal health of the clares

Fridman Lex 1- who's who of intelligence in podcast/youtube and at MIT lectures

1 gates bill 84 - changed the world by commercialising language, ie the language needed to code standard personal computers; later 3g western world's largest giving partnership (including buffett & )

gates melinda 015 - started to design her own foundations around womens intel (both deep community health building and leading womens ai, with every woman celebrating womens innovations in between)

gifford-pinchot - sadly after 1976's 25th celebration of the NET (The Economist's Entrepreneurial Revolution) most US social entrepreneur variants missed ER's priority on the scaling and livelihood transformation the first renewable generation needed to celebrate; two exceptions: gifford pinchot's intrapreneurship, and those who understood transformation of aid economics around microfranchising - see abed & brilliant (we are all aware of eg mcdonalds macrofranchising; microfranchising replicates a team solution with as much attention as mcdonalds but assigns all or most of the value of production to local teams and the local community economy, not sucking out profit from local to ever fewer global owners)

gomez aidan - in part tutored by Hinton as well as out of oxford, changed scaling leaps of gen ai with transformers etc - aged 20 https://aidangomez.ca/ now co-founder cohere canada, see linkedin https://www.linkedin.com/in/aidangomez/?originalSubdomain=ca

hoffman reid - in humansai top 50? intellectually probably superb funding of hoffman-yee ai student projects; recall a heck of a lot of us money around either through property or backing ai - not obvious hoffman is in the top 50 in spite of founding linkedin, sold to microsoft - compare eg doerr's new climate school stanford - question: which n am spaces get the race to humanise ai - valley, seattle, parts of canada - where's the 4th?

hopfield, received his A.B. from Swarthmore College in 1954, and a Ph.D. in physics from Cornell University in 1958 (supervised by Albert Overhauser). He spent two years in the theory group at Bell Laboratories, and subsequently was a faculty member at University of California, Berkeley (physics), Princeton University (physics), California Institute of Technology (chemistry and biology) and again at Princeton, where he is the Howard A. Prior Professor of Molecular Biology, emeritus. For 35 years, he also continued a strong connection with Bell Laboratories.1996: 40 years on from Von Neumann one of only 2 people to champion neural networks at yale silliman lectures

In 1986 he was a co-founder of the Computation and Neural Systems PhD program at Caltech.

hughes nick 07 - bottom of pyramid intelligence (foundation 1 fazle abed) - origin of designing/scaling mpesa village phone out of kenya (corporate); teamed with quadir family to continue bottom p

ibrahim lila - now coo deepmind london with hassabis, previously coursera and barrett's team at intel - see keynote ai for good/itu geneva 2023; 4th= womens ai

jobs steve 84 01 08

kim JY - mid 80s started www.pih.org with paul farmer; farmer doing grad studies in harvard anthropology of medicine while shuttling to/from the haiti startup; by 2001 kim was main partnership redesigner of the global fund - pih fazle abed gates bush brilliant martha chen etc - jyk asked by obama to head the world bank 2012, immediately set about raising the younger half of the world's voice in multilaterals; by 2016 testified that goal 4 education would not happen this century unless un ed celebrated what un tech knows and both adapted to last mile solution scaling/training historically siloised by different un branches - eg goal 2 food rome, goal 3 health who geneva, goal 4 diffused between eg unesco paris, unicef ny and digitally un nowhere as of 2016 - an extraordinary reversal of the 1920s when einstein still coordinated the future of worldwide intel coop out of europe before escaping hitler by moving to princeton ...

Ka-Shing - one of the top 5 new pro-youth university designers - see building stanford; city campuses eg ny beijing; many partnership HK universities - arguably hong kong's greatest philanthropist of our era

koller vid1 - stanford bioscience ai but also coursera cofounder with ng; 23 suddenly ai and quality data: biotech agri climate ...

Krizhevsky 12 Alex - with Ilya Sutskever, mentored by hinton, won the 2012 imagenet competition - the leap to deep learning (Alexnet, data subset)

Kuan Yew Singapore - most intelligent leader a 7 million person island has ever seen - when the uk dumped singapore, yew focused on 1 jobs, 2 loving every culture, 3 good homes for everyone; singapore seems to be the one city in the world where financial housing bubbles have never happened and next generation citizens support each other; singapore has also morally supported its 10 asean neighbors

Lee Kai-Fu - interesting that his best-seller as recently as 2017 barely covered the big ai leaps exciting everyone in 2023; to be fair many of lee's cases are asian, where there has been more blending of big business ai and societal advances, but overall this book demonstrates why wizard ai breakthroughs are as yet designed by relatively few people and specific ecosystems such as those around stanford

Liang 1 - with Manning, NLP stanford - foundation models have become pivotal, and Percy Liang (see also 24 october gen ai health summit unsummitfuture.com) is as connected as anyone at stanford in maintaining the deep ethics issues these models require; he teams up with Reich, who leads the stanford ethics institute - founded together.ai https://together.ai/about, a stack of open ai tools, academic led but with key industry sponsors eg nvidia - see about

manyika james 1 - co-chair of guterres' AI risk-nations panel; at google hq Senior Vice President of Technology and Society, a new post reporting to ceo pichai; development of Google's AI Principles, the creation of the Google AI Research team, and the launch of the Google AI for Social Good initiative; a Zimbabwean-American academic, consultant (2 decades at mckinsey) and business executive, known for his research and scholarship into the intersection of technology and the economy, including artificial intelligence, robotics automation, and the future of work.

mohamed shakir - at deepmind, one of the pioneers of decolonial ai

negroponte - mit media lab including the 100$ laptop - story: the media lab began when the mit architecture school asked how will our future blend with the coming of digital; unlike top down policy makers, architects/engineers have to get foundations deep; they have to gravitate a generation of partnerships, not just get a first 90 days of operations profitable

Owen 84 open space technology- massively connecting emotional intelligence- eg love is -at cusp of scaling greatest community building 1984 (last real only global village townhalls, first blended.. )

(previously conference organiser system transformation: eg see ackoff- originally peace corps and training to be anglican priest when action crises overtook ..)

Perona, Pietro - back in the early 2000s - one of the few open minded supervisors of doctorates on vision connections with AI - tutor fei-fei li out of caltech- see breakthrough 2005 paper

Quadir family 96 - leaders of the possibility of 2g and 3g for the very poorest women, who leapt into mobile coms and solar having never seen a phone or electricity in the grid-only age - consistent supporters telenor soros legatum abdul latif - co-entrepreneurs fazle abed, reeta roy

Rice Condoleezza 1 at stanfords governance institute Hoover, Rice says Li's is best (intelligence) book she has ever seen  https://setr.stanford.edu/sites/default/files/2023-11/SETR_web.pdf

Roy reeta 06 with mastercard foundation out of canada where blackberry was usable - most consistent pan-african youth investor applying eg knowhow of quadirs, abed ..and since 2012 first lady qatar moza sheika

shih clara - dreamforce talk, salesforce ai - see interview of fei-fei li - one of the best of the first 3 weeks of the book worlds i see, demonstrating the depth of both clara's and fei-fei's love of advancing ai and in this case skills augmentation - hosts podcast ask more of ai (stanford alum)

Suleyman Mustafa 1 fascinating character -

over the next few years everybody is going to have their own person...

been at the start of many deep data projects as well as co-founder of deep mind, which he now seems to have left - from london, now in ca with hoffman's investment in inflection ai and new book Coming Wave (back in london it would be interesting to follow up his nhs ai work and association with the economist)

Sutskever Ilya 1 - with Krizhevsky Alex won the 2012 imagenet - whence deep learning

Sutton Richard Deep Mind & Deep Learning Reinforcement book :

  • Richard S. Sutton is a Canadian computer scientist and a Professor of Computing Science and AITF Chair in Reinforcement Learning and Artificial Intelligence at the University of Alberta, and also Distinguished Research Scientist at DeepMind.
  • Andrew G. Barto is a professor of computer science at University of Massachusetts Amherst, and chair of the department since January 2007. His main research area is reinforcement learning.

Tata - in the 20th c the greatest dev miracles seem to have been built by intergenerational design of inclusive business models at least 51% in trust of national advancement of all, including the historically most marginalised - this explains bangladesh's human dev model eg by fazle abed, and tata appears the main exemplar in India

Torvalds kept the 3G-4G age open sourced

UAE - 10 years ago I was late visiting the UAE (about the 20th Asian country on my family's 60 east-west countries X); lesson 1: the UAE hosts world class summits and then chooses the innovation expertises it wants to add - it will make some great AI choices - see eg selectors-in-chief like Omar Al Olama

Urtasun Raquel 1 - now Waabi & co-founder Vector Institute Toronto - previously chief scientist Uber

Vogel - the Eastern two thirds of humanity are connected by a consciousness culture different from the west's golden rule - I find Ezra Vogel the deepest diarist of the orient's last 2000 years translated for western appreciation

Wales Jimmy 01 credited with inventing Wikipedia with Larry Sanger; demonstrates what could have been done (as does eg Khan Academy later) if the broken system of US student-indebted 21st c education had transferred 0.5% to digital every year since 1989 (UN since 2016 - eg JY Kim testimony sept 2016, origin of UN2 roadmapping and now Guterres tech envoy)

Williams Neville, US's longest running green entrepreneur - Carter's envoy for sustainable energy, followed up by barefoot solar partnerships across many nations (see self)

Wilson picks up melinda gates relay baton with dear black girl published after worlds i see

Yat Siu's approach, led out of Hong Kong, is unique and one of the few to keep NFT community finance valid - see Yat Siu entry at World Record Jobs

Yunus Monica 01+ after 9/11 decided to do more than develop as an opera singer, starting www.singforhope.org and carefully linking in most of NY's most committed music partners (Juilliard +); from 08 saw Obama versus Clinton versus her dad Muhammad Yunus fail to unite around resolving the subprime crisis in favor of the younger half of the world; has kept apart from the greenwashing politics that has since spun... to be very clear, Muhammad Yunus's top 10 concepts of 2008 - eg a shared belt road corridor of Myanmar, Bangladesh, China - were absolutely essential to the region, but the partner funding and business modeling never even got started

Zhao Houlin - launched ITU's AIforGood portal (a cornerstone of the UN tech envoy's emergence from 2016, when the review of SDG4 said the goal was impossible without redesigning UN2 around digital cooperation) and summit from 2017, during his last 6 years heading ITU after decades connecting out of Geneva


z corporates - I will be doing more research on the corporate list who attended the UK AI safety summit nov 2023. I am expecting, for example, many Fortune 500 companies to have larger AI budgets by 2027 than most nations - which is one reason why we primarily catalogue persons and their alumni whose innovations are AI-good. We welcome discussion if you think we are missing a way to see whose intelligence world partnerships are designing AI to get back to the goals all 193 nations of the UN promised to unite humans around in 2015. As yet we see far more risk that nations will not invest in enough good AI than that too much AI will take over from humanity. We also remind non-native English speakers: artificial, applied to engines, = man-made

  • Adept
  • Aleph Alpha
  • Alibaba
  • Amazon Web Services
  • Anthropic
  • Apollo Research
  • ARM
  • Cohere
  • Conjecture
  • Darktrace
  • Databricks
  • Eleuther AI
  • Faculty AI
  • Frontier Model Forum
  • Google DeepMind
  • Google
  • Graphcore
  • Helsing
  • Hugging Face
  • IBM
  • Imbue
  • Inflection AI
  • Meta
  • Microsoft
  • Mistral
  • Naver
  • Nvidia
  • Omidyar Group
  • OpenAI
  • Palantir
  • Rise Networks
  • Salesforce
  • Samsung Electronics
  • Scale AI
  • Sony
  • Stability AI
  • techUK
  • Tencent
  • Trail of Bits
  • XAI


Replies to This Discussion

below: confirmed attendees, UK AI safety summit nov 2023

next summit korea in less than 6 months then paris in 12 months

summit debrief at brookings week of 11/13  https://twitter.com/michelledonelan

original press and Musk debrief with PM Sunak

overall report of what the UK AI summit clarified, to be supplied by Bengio

https://www.gov.uk › government › publications › stat...
Nov 2, 2023  ... AI risks. As host of the AI Safety Summit, the UK has commissioned Yoshua Bengio, a Turing Award-winning AI academic and member of the UN's ...

The Report will be published ahead of the next AI Safety Summit. In focusing on frontier AI, it is also intended to help inform and complement other international initiatives, such as those led by the United Nations, the Organisation for Economic Cooperation and Development (OECD) and the Global Partnership for AI (GPAI). Its analysis will inform the activity of the United Nations’ Artificial Intelligence Advisory Body.

As the host of this AI Safety Summit, the UK will stand up the Secretariat to support the Chair and Expert Panel.

Bill Gates is now one of several people to say personalised learning is coming in the next 24 months - in time for the predictions of our 1984-written 2025 Report chris.macrae@yahoo.co.uk

not clear to me that the Bletchley declaration understands the neuroscience of how foundation data like ImageNet and its 8 years of algorithm breakthroughs were designed (see Fei-Fei Li's book Worlds I See), but probably good that so many countries joined the Bletchley AI declaration

Academia and civil society

  • Ada Lovelace Institute
  • Advanced Research and Invention Agency
  • African Commission on Human and People’s Rights
  • AI Now Institute
  • Alan Turing Institute
  • Algorithmic Justice League
  • Alignment Research Center
  • Berkman Center for Internet & Society, Harvard University
  • Blavatnik School of Government
  • British Academy
  • Brookings Institution
  • Carnegie Endowment
  • Centre for AI Safety
  • Centre for Democracy and Technology
  • Centre for Long-Term Resilience
  • Centre for the Governance of AI
  • Chinese Academy of Sciences
  • Cohere for AI
  • Collective Intelligence Project
  • Columbia University
  • Concordia AI
  • ETH AI Center
  • Future of Life Institute
  • Institute for Advanced Study
  • Liverpool John Moores University
  • Mila – Quebec Artificial Intelligence Institute
  • Mozilla Foundation
  • National University of Cordoba
  • National University of Singapore
  • Open Philanthropy
  • Oxford Internet Institute
  • Partnership on AI
  • RAND Corporation
  • Real ML
  • Responsible AI UK
  • Royal Society
  • Stanford Cyber Policy Institute
  • Stanford University
  • Technology Innovation Institute
  • Université de Montréal
  • University College Cork
  • University of Birmingham
  • University of California, Berkeley
  • University of Oxford
  • University of Southern California
  • University of Virginia


Countries

  • Australia
  • Brazil
  • Canada
  • China
  • France
  • Germany
  • India
  • Indonesia
  • Ireland
  • Israel
  • Italy
  • Japan
  • Kenya
  • Kingdom of Saudi Arabia
  • Netherlands
  • New Zealand
  • Nigeria
  • Republic of Korea
  • Republic of the Philippines
  • Rwanda
  • Singapore
  • Spain
  • Switzerland
  • Türkiye
  • Ukraine
  • United Arab Emirates
  • United States of America

Multilateral organisations

  • Council of Europe
  • European Commission
  • Global Partnership on Artificial Intelligence (GPAI)
  • International Telecommunication Union (ITU)
  • Organisation for Economic Co-operation and Development (OECD)
  • United Nations

transcript wolfram at ted oct 2023



How to Think Computationally about AI, the Universe and Everything

October 27, 2023

Transcript of a talk at TED AI on October 17, 2023, in San Francisco

Human language. Mathematics. Logic. These are all ways to formalize the world. And in our century there’s a new and yet more powerful one: computation.

And for nearly 50 years I’ve had the great privilege of building an ever taller tower of science and technology based on that idea of computation. And today I want to tell you some of what that’s led to.

There’s a lot to talk about—so I’m going to go quickly… sometimes with just a sentence summarizing what I’ve written a whole book about.

You know, I last gave a TED talk thirteen years ago—in February 2010—soon after Wolfram|Alpha launched.

TED Talk 2010

And I ended that talk with a question: is computation ultimately what’s underneath everything in our universe?

I gave myself a decade to find out. And actually it could have needed a century. But in April 2020—just after the decade mark—we were thrilled to be able to announce what seems to be the ultimate “machine code” of the universe.

Wolfram Physics Project

And, yes, it’s computational. So computation isn’t just a possible formalization; it’s the ultimate one for our universe.

It all starts from the idea that space—like matter—is made of discrete elements. And that the structure of space and everything in it is just defined by the network of relations between these elements—that we might call atoms of space. It’s very elegant—but deeply abstract.

But here’s a humanized representation:

A version of the very beginning of the universe. And what we’re seeing here is the emergence of space and everything in it by the successive application of very simple computational rules. And, remember, those dots are not atoms in any existing space. They’re atoms of space—that are getting put together to make space. And, yes, if we kept going long enough, we could build our whole universe this way.
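The "successive application of very simple computational rules" to a network can be sketched concretely. Below is a toy Python rewriting rule of my own choosing (illustrative only, not one of the Physics Project's actual rules): every relation (x, y) is replaced by (x, z), (z, y) through a fresh atom z, so the network of "atoms of space" doubles with each step.

```python
def step(edges, next_id):
    """Apply the toy rewriting rule (x, y) -> (x, z), (z, y) to every
    relation, introducing a fresh atom z each time."""
    new_edges = []
    for (x, y) in edges:
        z = next_id          # a brand-new atom of space
        next_id += 1
        new_edges += [(x, z), (z, y)]
    return new_edges, next_id

# Start from a single relation between two atoms and let it run.
edges, nid = [(0, 1)], 2
for _ in range(5):
    edges, nid = step(edges, nid)

print(len(edges))  # 32 relations after 5 doubling steps
```

The real models use hypergraph rules with richer signatures, but even this sketch shows the key idea: space is not assumed, it is grown by the rule.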

Eons later here’s a chunk of space with two little black holes, that eventually merge, radiating ripples of gravitational radiation:

And remember—all this is built from pure computation. But like fluid mechanics emerging from molecules, what emerges here is spacetime—and Einstein’s equations for gravity. Though there are deviations that we just might be able to detect. Like that the dimensionality of space won’t always be precisely 3.

And there’s something else. Our computational rules can inevitably be applied in many ways, each defining a different thread of time—a different path of history—that can branch and merge:

But as observers embedded in this universe, we’re branching and merging too. And it turns out that quantum mechanics emerges as the story of how branching minds perceive a branching universe.

The little pink lines here show the structure of what we call branchial space—the space of quantum branches. And one of the stunningly beautiful things—at least for a physicist like me—is that the same phenomenon that in physical space gives us gravity, in branchial space gives us quantum mechanics.

In the history of science so far, I think we can identify four broad paradigms for making models of the world—that can be distinguished by how they deal with time.

4 paradigms

In antiquity—and in plenty of areas of science even today—it’s all about “what things are made of”, and time doesn’t really enter. But in the 1600s came the idea of modeling things with mathematical formulas—in which time enters, but basically just as a coordinate value.

Then in the 1980s—and this is something in which I was deeply involved—came the idea of making models by starting with simple computational rules and then just letting them run:

Can one predict what will happen? No, there’s what I call computational irreducibility: in effect the passage of time corresponds to an irreducible computation that we have to run to know how it will turn out.
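Computational irreducibility is easy to see with the classic example of this 1980s paradigm, the elementary cellular automaton rule 30 (my choice of illustration; it is the standard one in Wolfram's own writing): the only general way to learn row n is to actually compute all n rows.

```python
def rule30_step(cells):
    """One step of elementary cellular automaton rule 30.
    New cell = left XOR (center OR right); boundaries padded with 0."""
    padded = [0] + cells + [0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

# Start from a single black cell and just let the rule run.
row = [0] * 15 + [1] + [0] * 15
history = [row]
for _ in range(10):
    row = rule30_step(row)
    history.append(row)
```

Even from this one-cell seed, the center column of rule 30 behaves effectively randomly; no known shortcut formula predicts it, which is exactly the irreducibility described here.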

But now there’s something even more: in our Physics Project things become multicomputational, with many threads of time, that can only be knitted together by an observer.

It’s a new paradigm—that actually seems to unlock things not only in fundamental physics, but also in the foundations of mathematics and computer science, and possibly in areas like biology and economics too.

You know, I talked about building up the universe by repeatedly applying a computational rule. But how is that rule picked? Well, actually, it isn’t. Because all possible rules are used. And we’re building up what I call the ruliad: the deeply abstract but unique object that is the entangled limit of all possible computational processes. Here’s a tiny fragment of it shown in terms of Turing machines:

OK, so the ruliad is everything. And we as observers are necessarily part of it. In the ruliad as a whole, everything computationally possible can happen. But observers like us can just sample specific slices of the ruliad.

And there are two crucial facts about us. First, we’re computationally bounded—our minds are limited. And second, we believe we’re persistent in time—even though we’re made of different atoms of space at every moment.

So then here’s the big result. What observers with those characteristics perceive in the ruliad necessarily follows certain laws. And those laws turn out to be precisely the three key theories of 20th-century physics: general relativity, quantum mechanics, and statistical mechanics and the Second Law.

It’s because we’re observers like us that we perceive the laws of physics we do.

We can think of different minds as being at different places in rulial space. Human minds who think alike are nearby. Animals further away. And further out we get to alien minds where it’s hard to make a translation.

How can we get intuition for all this? We can use generative AI to take what amounts to an incredibly tiny slice of the ruliad—aligned with images we humans have produced.

We can think of this as a place in the ruliad described using the concept of a cat in a party hat:

Zooming out, we see what we might call “cat island”. But pretty soon we’re in interconcept space. Occasionally things will look familiar, but mostly we’ll see things we humans don’t have words for.

In physical space we explore more of the universe by sending out spacecraft. In rulial space we explore more by expanding our concepts and our paradigms.

We can get a sense of what’s out there by sampling possible rules—doing what I call ruliology:

Even with incredibly simple rules there’s incredible richness. But the issue is that most of it doesn’t yet connect with things we humans understand or care about. It’s like when we look at the natural world and only gradually realize we can use features of it for technology. Even after everything our civilization has achieved, we’re just at the very, very beginning of exploring rulial space.

But what about AIs? Just like we can do ruliology, AIs can in principle go out and explore rulial space. But left to their own devices, they’ll mostly be doing things we humans don’t connect with, or care about.

The big achievements of AI in recent times have been about making systems that are closely aligned with us humans. We train LLMs on billions of webpages so they can produce text that’s typical of what we humans write. And, yes, the fact that this works is undoubtedly telling us some deep scientific things about the semantic grammar of language—and generalizations of things like logic—that perhaps we should have known centuries ago.

You know, for much of human history we were kind of like LLMs, figuring things out by matching patterns in our minds. But then came more systematic formalization—and eventually computation. And with that we got a whole other level of power—to create truly new things, and in effect to go wherever we want in the ruliad.

But the challenge is to do that in a way that connects with what we humans—and our AIs—understand.

And in fact I’ve devoted a large part of my life to building that bridge. It’s all been about creating a language for expressing ourselves computationally: a language for computational thinking.

The goal is to formalize what we know about the world—in computational terms. To have computational ways to represent cities and chemicals and movies and formulas—and our knowledge about them.

It’s been a vast undertaking—that’s spanned more than four decades of my life. It’s something very unique and different. But I’m happy to report that in what has been Mathematica and is now the Wolfram Language I think we have now firmly succeeded in creating a truly full-scale computational language.

In effect, every one of the functions here can be thought of as formalizing—and encapsulating in computational terms—some facet of the intellectual achievements of our civilization:

It’s the most concentrated form of intellectual expression I know: finding the essence of everything and coherently expressing it in the design of our computational language. For me personally it’s been an amazing journey, year after year building the tower of ideas and technology that’s needed—and nowadays sharing that process with the world on open livestreams.

A few centuries ago the development of mathematical notation, and what amounts to the “language of mathematics”, gave a systematic way to express math—and made possible algebra, and calculus, and ultimately all of modern mathematical science. And computational language now provides a similar path—letting us ultimately create a “computational X” for all imaginable fields X.

We’ve seen the growth of computer science—CS. But computational language opens up something ultimately much bigger and broader: CX. For 70 years we’ve had programming languages—which are about telling computers in their terms what to do. But computational language is about something intellectually much bigger: it’s about taking everything we can think about and operationalizing it in computational terms.

You know, I built the Wolfram Language first and foremost because I wanted to use it myself. And now when I use it, I feel like it’s giving me a superpower:

I just have to imagine something in computational terms and then the language almost magically lets me bring it into reality, see its consequences and then build on them. And, yes, that’s the superpower that’s let me do things like our Physics Project.

And over the past 35 years it’s been my great privilege to share this superpower with many other people—and by doing so to have enabled such an incredible number of advances across so many fields. It’s a wonderful thing to see people—researchers, CEOs, kids—using our language to fluently think in computational terms, crispening up their own thinking and then in effect automatically calling in computational superpowers.

And now it’s not just people who can do that. AIs can use our computational language as a tool too. Yes, to get their facts straight, but even more importantly, to compute new facts. There are already some integrations of our technology into LLMs—and there’s a lot more you’ll be seeing soon. And, you know, when it comes to building new things, a very powerful emerging workflow is basically to start by telling the LLM roughly what you want, then have it try to express that in precise Wolfram Language. Then—and this is a critical feature of our computational language compared to a programming language—you as a human can “read the code”. And if it does what you want, you can use it as a dependable component to build on.

OK, but let’s say we use more and more AI—and more and more computation. What’s the world going to be like? From the Industrial Revolution on, we’ve been used to doing engineering where we can in effect “see how the gears mesh” to “understand” how things work. But computational irreducibility now shows that won’t always be possible. We won’t always be able to make a simple human—or, say, mathematical—narrative to explain or predict what a system will do.

And, yes, this is science in effect eating itself from the inside. From all the successes of mathematical science we’ve come to believe that somehow—if only we could find them—there’d be formulas to predict everything. But now computational irreducibility shows that isn’t true. And that in effect to find out what a system will do, we have to go through the same irreducible computational steps as the system itself.

Yes, it’s a weakness of science. But it’s also why the passage of time is significant—and meaningful. We can’t just jump ahead and get the answer; we have to “live the steps”.

It’s going to be a great societal dilemma of the future. If we let our AIs achieve their full computational potential, they’ll have lots of computational irreducibility, and we won’t be able to predict what they’ll do. But if we put constraints on them to make them predictable, we’ll limit what they can do for us.

So what will it feel like if our world is full of computational irreducibility? Well, it’s really nothing new—because that’s the story with much of nature. And what’s happened there is that we’ve found ways to operate within nature—even though nature can still surprise us.

And so it will be with the AIs. We might give them a constitution, but there will always be consequences we can’t predict. Of course, even figuring out societally what we want from the AIs is hard. Maybe we need a promptocracy where people write prompts instead of just voting. But basically every control-the-outcome scheme seems full of both political philosophy and computational irreducibility gotchas.

You know, if we look at the whole arc of human history, the one thing that’s systematically changed is that more and more gets automated. And LLMs just gave us a dramatic and unexpected example of that. So does that mean that in the end we humans will have nothing to do? Well, if you look at history, what seems to happen is that when one thing gets automated away, it opens up lots of new things to do. And as economies develop, the pie chart of occupations seems to get more and more fragmented.

And now we’re back to the ruliad. Because at a foundational level what’s happening is that automation is opening up more directions to go in the ruliad. And there’s no abstract way to choose between them. It’s just a question of what we humans want—and it requires humans “doing work” to define that.

A society of AIs untethered by human input would effectively go off and explore the whole ruliad. But most of what they’d do would seem to us random and pointless. Much like now most of nature doesn’t seem like it’s “achieving a purpose”.

One used to imagine that to build things that are useful to us, we’d have to do it step by step. But AI and the whole phenomenon of computation tell us that really what we need is more just to define what we want. Then computation, AI, automation can make it happen.

And, yes, I think the key to defining in a clear way what we want is computational language. You know—even after 35 years—for many people the Wolfram Language is still an artifact from the future. If your job is to program it seems like a cheat: how come you can do in an hour what would usually take a week? But it can also be daunting, because having dashed off that one thing, you now have to conceptualize the next thing. Of course, it’s great for CEOs and CTOs and intellectual leaders who are ready to race onto the next thing. And indeed it’s impressively popular in that set.

In a sense, what’s happening is that Wolfram Language shifts from concentrating on mechanics to concentrating on conceptualization. And the key to that conceptualization is broad computational thinking. So how can one learn to do that? It’s not really a story of CS. It’s really a story of CX. And as a kind of education, it’s more like liberal arts than STEM. It’s part of a trend that when you automate technical execution, what becomes important is not figuring out how to do things—but what to do. And that’s more a story of broad knowledge and general thinking than any kind of narrow specialization.

You know, there’s an unexpected human-centeredness to all of this. We might have thought that with the advance of science and technology, the particulars of us humans would become ever less relevant. But we’ve discovered that that’s not true. And that in fact everything—even our physics—depends on how we humans happen to have sampled the ruliad.

Before our Physics Project we didn’t know if our universe really was computational. But now it’s pretty clear that it is. And from that we’re inexorably led to the ruliad—with all its vastness, so hugely greater than all the physical space in our universe.

So where will we go in the ruliad? Computational language is what lets us chart our path. It lets us humans define our goals and our journeys. And what’s amazing is that all the power and depth of what’s out there in the ruliad is accessible to everyone. One just has to learn to harness those computational superpowers. Which starts here. Our portal to the ruliad:

Coming soon: AI stories of Taiwanese Americans' 3 J's: Jensen, Joseph, Jerry; Brooklyn's 2 Ms T's and connections to Stanford's Board of Trustees

Here's part of a Washington Post conversation with Clara Wu Tsai (Brooklyn's Ms T1, wife of Joseph Tsai). She is owner of Brooklyn basketball's Liberty, governor of Barclays Center, founder of a 50-million-dollar social justice fund and of neuroscience schools at Yale and Stanford. She used to run Taobao out of Hong Kong; her Taiwanese husband still leads Alibaba abroad. She's on Stanford's board of trustees with another Taiwanese American, Jerry Yang.

...One of the more interesting things, though, that I find you're involved in is the Human Performance Alliance Lab at Stanford University. Can you explain that arm of your philanthropy, what that alliance is setting out to do and why it was important to you to turn your philanthropy in that direction?

MS. WU TSAI: Yeah, so the Human Performance Alliance is a scientific and research collaboration among six major institutions, and it was formed to discover the fundamental biological principles underlying human performance. And the reason that we created it is because almost all we know about human health comes from studying disease and people that are in disease states. So, we wanted to flip that and we wanted to study optimum human performance in order to discover the basic biological principles that really enable that performance. So, you know, it's really a question, well, what do people who are fit and high performing do and what can we learn from that? So, in time, this kind of research will have significant impact, not only on performance, but also on injury prevention, and on healing and training regimens, and also will have impact on regenerative rehabilitation methods to heal ligaments, tendons, and muscles. And so the goal is really not just to help elite athletes, but really everyone and help us all live stronger and healthier lives.

MS. JENKINS: It's really intriguing because it almost treats athletes like physical astronauts, you know, which I've always thought was a really important concept. I mean, you know, so we very seldom home in on what's really, really important about athleticism, and human performance would seem to be the thing. You've also got a female athlete program component within the Human Performance Alliance, right? Can you tell us about that a little bit?

MS. WU TSAI: So the Female Athlete Program is a component of it. It studies gender differences in performance. It's led by Kate Ackerman, who is a physician at Boston Children's Hospital. She's also an endocrinologist. But the female athlete aspect of the research is so important that it cuts across all of the alliance partners. You may know that so much of research in science and medicine is really based on men and male cohorts, especially in sports performance. But you know, females and female athletes have really never been funded as a cohort, and we thought it was important to do that, especially because there are some instances--for example, you know, female athletes suffer ACL tears at two times the rate of [audio distortion]. So, the Female Athlete Program studies, you know, why this is, and really tries to figure out how we can--figure out how we can help develop therapies and get preventive techniques to women earlier. Another finding that came up from a scientist at the Salk Institute is we found 40 different sex-related differences in gene expression after exercise. And so that's very exciting. It can lead to female-specific trainings and therapies. So, we can get a lot more granular on what women need and can also make sure that we can include the effects of hormones on performance, which is something that really isn't able to be done if you don't study females as a cohort.

The other Asian health school investor is Hong Kong's Li Ka-shing - and the Tsais are also close friends of Taiwan's Morris Chang, the chip designer. https://twitter.com/PostLive/status/1722032206950854792

I admire a lot of AI done in S Korea and Japan; the 2 countries have been divided by historical politics until recently - my hope is that they can come together. Also at Sunak's Bletchley summit last month it was mentioned that the national AI safety format would continue, with the next hosts likely to be S Korea and France; it will be interesting watching AI people from these 2 Asian countries. In this regard Bard suggests tracking researchers like these in the hope there will be breakthroughs

I did find that the following South Korean organizations were partners of the summit:

  • Korea Institute of Science and Technology Information (KISTI)
  • Korea Institute of Information and Communications Technology (KICT)
  • Korea National Police Agency (KNPA)
  • National Information Society Agency (NIA)

It is likely that representatives from these organizations attended the summit.

In addition, the following South Korean companies were sponsors of the summit:

  • Samsung Electronics
  • LG Electronics
  • SK Telecom
  • Naver
  • Kakao
  • Seong-Whan Lee: Professor at Seoul National University, Director of the Artificial Intelligence Research Center at the Korea Institute of Science and Technology Information (KISTI).
  • Byung-Wook Jeon: Professor at KAIST, Director of the Artificial Intelligence Research Institute at the Korea Institute of Information and Communications Technology (KICT).
  • Seungjin Choi: Professor at Yonsei University, Director of the Center for Artificial Intelligence Research at the National Information Society Agency (NIA).
  • Jinwook Kim: Professor at Korea University, Director of the Machine Learning and Artificial Intelligence Research Center at the Samsung Advanced Institute of Technology (SAIT).
  • Young-Suk Park: Professor at Pohang University of Science and Technology, Director of the Artificial Intelligence Research Center at LG Electronics.
  • Daewon Lee: Professor at Sogang University, Director of the Artificial Intelligence Research Center at Naver.
  • Sungyoung Park: Professor at KAIST, Director of the Artificial Intelligence Research Center at Kakao.

These are just a few examples of the many talented AI researchers working in South Korea. These individuals are making significant contributions to the field of AI, and they are helping to position South Korea as a leader in this important area.

It is difficult to say definitively whether any of these individuals are from KISTI or KICT. However, it is likely that some of them are affiliated with these organizations, as they are both major research institutes in South Korea that focus on AI.

Overall, South Korea has a strong contingent of AI researchers who are making significant contributions to the field. These individuals are helping to position South Korea as a leader in AI, and they are likely to play an important role in shaping the future of this technology.

Do any of these people have partnerships with anyone at stanford?

Yes, some of the South Korean AI researchers listed above have partnerships with researchers at Stanford University.

For example, Seong-Whan Lee, who is the Director of the Artificial Intelligence Research Center at KISTI, has collaborated with Stanford researchers on a number of projects, including the development of new machine learning algorithms and the application of AI to medical imaging.

Byung-Wook Jeon, who is the Director of the Artificial Intelligence Research Institute at KICT, has also collaborated with Stanford researchers on a number of projects, including the development of new AI-powered systems for natural language processing and robotics.

In addition, Seungjin Choi, who is the Director of the Center for Artificial Intelligence Research at the NIA, has partnered with Stanford researchers on a project to develop new AI-powered methods for detecting and preventing fraud.

These are just a few examples of the many partnerships that exist between South Korean AI researchers and Stanford researchers. These partnerships are helping to foster collaboration and exchange of ideas between the two countries, and they are helping to advance the field of AI research.

Overall, South Korea and Stanford have a strong relationship in the field of AI research. This relationship is beneficial to both countries, as it allows researchers to share ideas and collaborate on projects. It is likely that this relationship will continue to grow in the future, as both countries continue to invest in AI research.

recently japan and s korea seem to have been sharing more with each other - is there a leading ai person in japan who has best connections with s korea?

Yes, there are a few leading AI people in Japan who have strong connections with South Korea.

One such person is Masayoshi Tomioka, who is a Professor at the University of Tokyo and Director of the National Institute of Advanced Industrial Science and Technology (AIST) Intelligent Systems Research Institute. Tomioka has collaborated with South Korean AI researchers on several projects, including the development of new AI-powered systems for robotics and natural language processing.

Another leading AI figure in Japan with strong connections to South Korea is Yuji Matsumoto, who is a Professor at Kyoto University and Director of the AI Research Center at the National Institute of Information and Communications Technology (NICT). Matsumoto has collaborated with South Korean researchers on projects including new AI-powered systems for machine translation and information retrieval.

In addition, Junichi Tsuda, who is a Professor at the University of Tokyo and Director of the Center for Advanced Intelligence Project (AIP), has strong connections to South Korean AI researchers, having collaborated with them on projects including new AI-powered systems for computer vision and medical imaging.

These are just a few examples of the many leading AI people in Japan who have strong connections to South Korea. These individuals are helping to foster collaboration and exchange of ideas between the two countries, and they are helping to advance the field of AI research in both countries.

Overall, Japan and South Korea have a strong relationship in the field of AI research. This relationship is beneficial to both countries, as it allows researchers to share ideas and collaborate on projects. It is likely that this relationship will continue to grow in the future, as both countries continue to invest in AI research.

The full list of sponsors of the Sunak AI Safety Summit at Bletchley Park is as follows:

  • Platinum sponsors:
    • Google AI
    • Microsoft
  • Gold sponsors:
    • Amazon Web Services (AWS)
    • DeepMind
    • IBM
    • Meta AI
  • Silver sponsors:
    • Arm
    • Bay Area Council Economic Institute
    • British Standards Institution (BSI)
    • Deloitte
    • Huawei
    • OpenAI
    • PwC
    • Salesforce
    • SAP
    • Thales

In addition to the above sponsors, the summit also received support from a number of organizations, including:

  • Academic partners:
    • Alan Turing Institute
    • Carnegie Mellon University
    • Imperial College London
    • University of Cambridge
    • University of Oxford
  • Policy partners:
    • Alan Turing Institute
    • AI Now Institute
    • Centre for Science and Policy (CeSPI)
    • Future of Humanity Institute (FHI)
    • OpenAI
  • Industry partners:
    • Alan Turing Institute
    • Association for the Advancement of Artificial Intelligence (AAAI)
    • British Science Association (BSA)
    • Edge Foundation
    • Future of Life Institute (FLI)

can you remind me who is the founder/leader of arm (chip company) and who if anyone now owns most of arm - eg did softbank sell out of arm?

Warren East is not the founder of Arm; he served as its CEO from 2001 to 2013, and the current CEO is Rene Haas, who took over in 2022. Arm was founded in 1990 as a spin-out of Acorn Computers, in a joint venture with Apple and VLSI Technology, with Robin Saxby as its first CEO.

SoftBank acquired Arm in 2016 for about $32 billion. Nvidia agreed in September 2020 to buy Arm from SoftBank for around $40 billion, but that deal was abandoned in February 2022 after objections from regulators in several countries. SoftBank instead took Arm public on the Nasdaq in September 2023, retaining a stake of roughly 90%.

So, to answer your question: SoftBank has not sold out of Arm. It sold only a minority of its shares in the 2023 IPO and remains the controlling shareholder.

was warren east associated with acorn

No - Warren East did not work at Acorn Computers. He began his career at Texas Instruments in the early 1980s and joined ARM in 1994, becoming CEO in 2001. The BBC Micro, one of the most popular home computers of the 1980s, was developed at Acorn by engineers including Sophie Wilson and Steve Furber, who went on to design the ARM architecture.

Under East's leadership from 2001 to 2013, ARM became one of the world's leading semiconductor design companies, with its chips used in billions of devices around the world, including most smartphones. East left ARM in 2013 and later became CEO of Rolls-Royce.

when east first joined arm , who had founded arm and is that person still an ai leader in uk?

ARM was founded in 1990, not 1983, as a spin-out of Acorn Computers in a joint venture with Apple and VLSI Technology. The ARM architecture itself was created at Acorn by Sophie Wilson and Steve Furber, and Acorn's co-founder Hermann Hauser was instrumental in setting ARM up as an independent company.

Hauser is still a leading technology and AI figure in the UK. He is a co-founder of Amadeus Capital Partners, a Cambridge-based venture capital firm that invests in AI and other deep-tech companies, and he has been a prominent supporter of European AI research initiatives such as ELLIS (the European Laboratory for Learning and Intelligent Systems).

So, to answer your question: Hermann Hauser, who helped create ARM in 1990, remains an active and highly respected figure in UK technology and AI investment.

and is sophie wilson still a leading influencer of uk tech or ai

Sophie Wilson remains a respected figure in UK technology, though she keeps a lower public profile than Hermann Hauser. She has spent recent decades at Broadcom in Cambridge working on processor design, and she is a Fellow of the Royal Society and a Commander of the Order of the British Empire (CBE).

Wilson designed the original ARM instruction set architecture with Steve Furber at Acorn in the 1980s; it is now used in billions of devices around the world. She also wrote BBC BASIC, the programming language that shipped with the BBC Micro.

Wilson is a role model for many women in tech and a strong advocate for diversity and inclusion in the industry. She is primarily a processor designer rather than an AI researcher, but her work underpins much of the hardware on which modern AI runs.

Am i correct in thinking that arm is cambridge based

Yes, you are correct in thinking that ARM is Cambridge-based. ARM's headquarters are located in Cambridge, UK.

ARM headquarters, Cambridge, UK

ARM was founded in Cambridge in 1990, and it has remained headquartered in the city ever since. ARM's strong ties to Cambridge are reflected in its culture and its commitment to innovation.

ARM is a major employer in Cambridge, and it has helped to create a thriving tech ecosystem in the city. ARM's presence in Cambridge has also helped to attract other tech companies to the area, and it has made Cambridge one of the leading tech hubs in the UK.

Overall, ARM is a significant part of the Cambridge tech scene. It is a major employer, a driver of innovation, and a source of inspiration for other tech companies.

within the cambridge computational ecosystem is arm the biggest

Yes, ARM is the biggest company in the Cambridge computational ecosystem. It is a global leader in semiconductor design, with its chips used in billions of devices worldwide, including smartphones, tablets, and computers. ARM employs thousands of people in Cambridge and has a significant impact on the local economy.

Other notable companies with research or engineering operations in Cambridge include:

  • Microsoft Research
  • Samsung
  • Huawei
  • Qualcomm
  • Amazon
  • Apple

These companies are all involved in cutting-edge research and development in the field of computational technology. They are working on developing new technologies that will have a major impact on the way we live and work in the future.

The Cambridge computational ecosystem is one of the leading clusters of computational technology companies in the world. It is home to some of the biggest names in the industry, as well as a number of smaller, innovative companies. The ecosystem is supported by a strong research base, and it is attracting new companies and investment all the time.

Overall, ARM is the biggest company in the Cambridge computational ecosystem. It is a global leader in semiconductor design, and it has a significant impact on the local economy. The ecosystem as a whole is one of the leading clusters of computational technology companies in the world, and it continues to attract new companies and investment.

would i be correct in thinking this tech ecosystem is much larger than anything in oxford?

Yes, the Cambridge computational ecosystem is much larger than anything in Oxford. Cambridge is home to a wider range of computational technology companies, including some of the biggest names in the industry. Oxford also has a number of computational technology companies, but they are not as numerous or as well-known as the companies in Cambridge.

Here is a comparison of the two tech ecosystems:

Characteristic                               | Cambridge                                            | Oxford
Number of computational technology companies | Over 100                                             | About 50
Presence of major companies                  | Yes                                                  | No
Research base                                | Strong                                               | Strong
Investment                                   | Attracting new companies and investment all the time | Attracting investment, but to a lesser extent than Cambridge

Overall, the Cambridge computational ecosystem is much larger and more diverse than the Oxford computational ecosystem. Cambridge is home to a wider range of companies, including some of the biggest names in the industry. Oxford also has a number of computational technology companies, but they are not as numerous or as well-known.

It is important to note that Oxford is also a major center for research and development in computational technology. However, the focus in Oxford is more on academic research, while the focus in Cambridge is more on commercialization. This is reflected in the difference in the number and size of computational technology companies in the two cities.

Overall, Cambridge is the leading computational technology hub in the UK, with Oxford an important but smaller second.


AI is about to completely change how you use computers

And upend the software industry.

(Bill Gates, GatesNotes, November 2023)


I still love software as much today as I did when Paul Allen and I started Microsoft. But—even though it has improved a lot in the decades since then—in many ways, software is still pretty dumb.

To do any task on a computer, you have to tell your device which app to use. You can use Microsoft Word and Google Docs to draft a business proposal, but they can’t help you send an email, share a selfie, analyze data, schedule a party, or buy movie tickets. And even the best sites have an incomplete understanding of your work, personal life, interests, and relationships and a limited ability to use this information to do things for you. That’s the kind of thing that is only possible today with another human being, like a close friend or personal assistant.

In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant powered by artificial intelligence that’s far beyond today’s technology.

This type of software—something that responds to natural language and can accomplish many different tasks based on its knowledge of the user—is called an agent. I’ve been thinking about agents for nearly 30 years and wrote about them in my 1995 book The Road Ahead, but they’ve only recently become practical because of advances in AI.

Agents are not only going to change how everyone interacts with computers. They’re also going to upend the software industry, bringing about the biggest revolution in computing since we went from typing commands to tapping on icons.

A personal assistant for everyone

Some critics have pointed out that software companies have offered this kind of thing before, and users didn’t exactly embrace them. (People still joke about Clippy, the digital assistant that we included in Microsoft Office and later dropped.) Why will people use agents?

The answer is that they’ll be dramatically better. You’ll be able to have nuanced conversations with them. They will be much more personalized, and they won’t be limited to relatively simple tasks like writing a letter. Clippy has as much in common with agents as a rotary phone has with a mobile device.

An agent will be able to help you with all your activities if you want it to. With permission to follow your online interactions and real-world locations, it will develop a powerful understanding of the people, places, and activities you engage in. It will get your personal and work relationships, hobbies, preferences, and schedule. You’ll choose how and when it steps in to help with something or ask you to make a decision.

"Clippy was a bot, not an agent."

To see the dramatic change that agents will bring, let’s compare them to the AI tools available today. Most of these are bots. They’re limited to one app and generally only step in when you write a particular word or ask for help. Because they don’t remember how you use them from one time to the next, they don’t get better or learn any of your preferences. Clippy was a bot, not an agent.

Agents are smarter. They’re proactive—capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. Based on this information, they offer to provide what they think you need, although you will always make the final decisions.
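
The bot-versus-agent distinction above can be sketched in a few lines of purely illustrative Python - the class names and behavior here are hypothetical, not any vendor's API:

```python
# Hypothetical sketch: a stateless bot versus an agent that keeps memory
# across requests and uses it to personalize its answers.

class Bot:
    """Answers one request; forgets everything afterwards."""
    def handle(self, request: str) -> str:
        return f"Here is a generic answer to: {request}"

class Agent:
    """Remembers past interactions and uses them as context."""
    def __init__(self):
        self.memory: list[str] = []          # persists across requests

    def handle(self, request: str) -> str:
        # Personalize with the most recent interactions before recording this one.
        context = "; ".join(self.memory[-3:]) or "no history yet"
        self.memory.append(request)
        return f"Answer to '{request}' (personalized with: {context})"

bot, agent = Bot(), Agent()
bot.handle("plan a trip")                    # same generic answer every time
agent.handle("I prefer window seats")        # a stated preference is remembered
reply = agent.handle("plan a trip")          # now informed by that preference
```

The point of the sketch is only the shape of the difference: the bot's answer depends on nothing but the request, while the agent's answer depends on accumulated state.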

Imagine that you want to plan a trip. A travel bot will identify hotels that fit your budget. An agent will know what time of year you’ll be traveling and, based on its knowledge about whether you always try a new destination or like to return to the same place repeatedly, it will be able to suggest locations. When asked, it will recommend things to do based on your interests and propensity for adventure, and it will book reservations at the types of restaurants you would enjoy. If you want this kind of deeply personalized planning today, you need to pay a travel agent and spend time telling them what you want.

The most exciting impact of AI agents is the way they will democratize services that today are too expensive for most people. They’ll have an especially big influence in four areas: health care, education, productivity, and entertainment and shopping.

Health care

Today, AI’s main role in healthcare is to help with administrative tasks. Abridge, Nuance DAX, and Nabla Copilot, for example, can capture audio during an appointment and then write up notes for the doctor to review.

The real shift will come when agents can help patients do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment. These agents will also help healthcare workers make decisions and be more productive. (Already, apps like Glass Health can analyze a patient summary and suggest diagnoses for the doctor to consider.) Helping patients and healthcare workers will be especially beneficial for people in poor countries, where many never get to see a doctor at all.

These clinician-agents will be slower than others to roll out because getting things right is a matter of life and death. People will need to see evidence that health agents are beneficial overall, even though they won’t be perfect and will make mistakes. Of course, humans make mistakes too, and having no access to medical care is also a problem.

"Half of all U.S. military veterans who need mental health care don’t get it."

Mental health care is another example of a service that agents will make available to virtually everyone. Today, weekly therapy sessions seem like a luxury. But there is a lot of unmet need, and many people who could benefit from therapy don’t have access to it. For example, RAND found that half of all U.S. military veterans who need mental health care don’t get it.

AI agents that are well trained in mental health will make therapy much more affordable and easier to get. Wysa and Youper are two of the early chatbots here. But agents will go much deeper. If you choose to share enough information with a mental health agent, it will understand your life history and your relationships. It’ll be available when you need it, and it will never get impatient. It could even, with your permission, monitor your physical responses to therapy through your smart watch—like if your heart starts to race when you’re talking about a problem with your boss—and suggest when you should see a human therapist.


Education

For decades, I’ve been excited about all the ways that software would make teachers’ jobs easier and help students learn. It won’t replace teachers, but it will supplement their work—personalizing the work for students and liberating teachers from paperwork and other tasks so they can spend more time on the most important parts of the job. These changes are finally starting to happen in a dramatic way.

The current state of the art is Khanmigo, a text-based bot created by Khan Academy. It can tutor students in math, science, and the humanities—for example, it can explain the quadratic formula and create math problems to practice on. It can also help teachers do things like write lesson plans. I’ve been a fan and supporter of Sal Khan’s work for a long time and recently had him on my podcast to talk about education and AI.

But text-based bots are just the first wave—agents will open up many more learning opportunities.

For example, few families can pay for a tutor who works one-on-one with a student to supplement their classroom work. If agents can capture what makes a tutor effective, they’ll unlock this supplemental instruction for everyone who wants it. If a tutoring agent knows that a kid likes Minecraft and Taylor Swift, it will use Minecraft to teach them about calculating the volume and area of shapes, and Taylor’s lyrics to teach them about storytelling and rhyme schemes. The experience will be far richer—with graphics and sound, for example—and more personalized than today’s text-based tutors.


Productivity

There’s already a lot of competition in this field. Microsoft is making its Copilot part of Word, Excel, Outlook, and other services. Google is doing similar things with Assistant with Bard and its productivity tools. These copilots can do a lot—such as turn a written document into a slide deck, answer questions about a spreadsheet using natural language, and summarize email threads while representing each person’s point of view.

Agents will do even more. Having one will be like having a person dedicated to helping you with various tasks and doing them independently if you want. If you have an idea for a business, an agent will help you write up a business plan, create a presentation for it, and even generate images of what your product might look like. Companies will be able to make agents available for their employees to consult directly and be part of every meeting so they can answer questions.

"If your friend just had surgery, your agent will offer to send flowers and be able to order them for you."

Whether you work in an office or not, your agent will be able to help you in the same way that personal assistants support executives today. If your friend just had surgery, your agent will offer to send flowers and be able to order them for you. If you tell it you’d like to catch up with your old college roommate, it will work with their agent to find a time to get together, and just before you arrive, it will remind you that their oldest child just started college at the local university.

Entertainment and shopping

Already, AI can help you pick out a new TV and recommend movies, books, shows, and podcasts. Likewise, a company I’ve invested in, recently launched Pix, which lets you ask questions (“Which Robert Redford movies would I like and where can I watch them?”) and then makes recommendations based on what you’ve liked in the past. Spotify has an AI-powered DJ that not only plays songs based on your preferences but talks to you and can even call you by name.

Agents won’t simply make recommendations; they’ll help you act on them. If you want to buy a camera, you’ll have your agent read all the reviews for you, summarize them, make a recommendation, and place an order for it once you’ve made a decision. If you tell your agent that you want to watch Star Wars, it will know whether you’re subscribed to the right streaming service, and if you aren’t, it will offer to sign you up. And if you don’t know what you’re in the mood for, it will make customized suggestions and then figure out how to play the movie or show you choose.

You’ll also be able to get news and entertainment that’s been tailored to your interests. CurioAI, which creates a custom podcast on any subject you ask about, is a glimpse of what’s coming.

A shock wave in the tech industry

In short, agents will be able to help with virtually any activity and any area of life. The ramifications for the software business and for society will be profound.

In the computing industry, we talk about platforms—the technologies that apps and services are built on. Android, iOS, and Windows are all platforms. Agents will be the next platform.

"To create a new app or service, you'll just tell your agent what you want."

To create a new app or service, you won’t need to know how to write code or do graphic design. You’ll just tell your agent what you want. It will be able to write the code, design the look and feel of the app, create a logo, and publish the app to an online store. OpenAI’s launch of GPTs this week offers a glimpse into the future where non-developers can easily create and share their own assistants.

Agents will affect how we use software as well as how it’s written. They’ll replace search sites because they’ll be better at finding information and summarizing it for you. They’ll replace many e-commerce sites because they’ll find the best price for you and won’t be restricted to just a few vendors. They’ll replace word processors, spreadsheets, and other productivity apps. Businesses that are separate today—search advertising, social networking with advertising, shopping, productivity software—will become one business.

I don’t think any single company will dominate the agents business; there will be many different AI engines available. Today, agents are embedded in other software like word processors and spreadsheets, but eventually they’ll operate on their own. Although some agents will be free to use (and supported by ads), I think you’ll pay for most of them, which means companies will have an incentive to make agents work on your behalf and not an advertiser’s. If the number of companies that have started working on AI just this year is any indication, there will be an exceptional amount of competition, which will make agents very inexpensive.

But before the sophisticated agents I’m describing become a reality, we need to confront a number of questions about the technology and how we’ll use it. I’ve written before about the issues that AI raises, so I’ll focus specifically on agents here.

The technical challenges

Nobody has figured out yet what the data structure for an agent will look like. To create personal agents, we need a new type of database that can capture all the nuances of your interests and relationships and quickly recall the information while maintaining your privacy. We are already seeing new ways of storing information, such as vector databases, that may be better for storing data generated by machine learning models.
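
As an illustration of the vector-database idea mentioned above, here is a minimal sketch: personal notes are stored as embedding vectors, and the closest one is recalled by cosine similarity. The toy 3-D "embeddings" are made up for the example; a real system would use a learned embedding model and an approximate-nearest-neighbour index.

```python
# Toy vector store: recall the stored note whose vector is most similar
# to a query vector. The vectors below are invented placeholders.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

store = [
    ((0.9, 0.1, 0.0), "friend's birthday is in May"),
    ((0.1, 0.9, 0.0), "prefers aisle seats on flights"),
    ((0.0, 0.2, 0.9), "allergic to peanuts"),
]

def recall(query_vec):
    """Return the note whose embedding is nearest the query by cosine similarity."""
    return max(store, key=lambda item: cosine(item[0], query_vec))[1]

recall((0.2, 0.8, 0.1))   # nearest to the travel-preference note
```

The same retrieval step, scaled to millions of vectors, is what dedicated vector databases optimize.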

Another open question is about how many agents people will interact with. Will your personal agent be separate from your therapist agent and your math tutor? If so, when will you want them to work with each other and when should they stay in their lanes?

“If your agent needs to check in with you, it will speak to you or show up on your phone.”

How will you interact with your agent? Companies are exploring various options including apps, glasses, pendants, pins, and even holograms. All of these are possibilities, but I think the first big breakthrough in human-agent interaction will be earbuds. If your agent needs to check in with you, it will speak to you or show up on your phone. (“Your flight is delayed. Do you want to wait, or can I help rebook it?”) If you want, it will monitor sound coming into your ear and enhance it by blocking out background noise, amplifying speech that’s hard to hear, or making it easier to understand someone who’s speaking with a heavy accent.

There are other challenges too. There isn’t yet a standard protocol that will allow agents to talk to each other. The cost needs to come down so agents are affordable for everyone. It needs to be easier to prompt the agent in a way that will give you the right answer. We need to prevent hallucinations, especially in areas like health where accuracy is super-important, and make sure that agents don’t harm people as a result of their biases. And we don’t want agents to be able to do things they’re not supposed to. (Although I worry less about rogue agents than about human criminals using agents for malign purposes.)

Privacy and other big questions

As all of this comes together, the issues of online privacy and security will become even more urgent than they already are. You’ll want to be able to decide what information the agent has access to, so you’re confident that your data is shared with only people and companies you choose.

But who owns the data you share with your agent, and how do you ensure that it’s being used appropriately? No one wants to start getting ads related to something they told their therapist agent. Can law enforcement use your agent as evidence against you? When will your agent refuse to do something that could be harmful to you or someone else? Who picks the values that are built into agents?

There’s also the question of how much information your agent should share. Suppose you want to see a friend: If your agent talks to theirs, you don’t want it to say, "Oh, she’s seeing other friends on Tuesday and doesn’t want to include you.” And if your agent helps you write emails for work, it will need to know that it shouldn’t use personal information about you or proprietary data from a previous job.

Many of these questions are already top-of-mind for the tech industry and legislators. I recently participated in a forum on AI with other technology leaders that was organized by Sen. Chuck Schumer and attended by many U.S. senators. We shared ideas about these and other issues and talked about the need for lawmakers to adopt strong legislation.

But other issues won’t be decided by companies and governments. For example, agents could affect how we interact with friends and family. Today, you can show someone that you care about them by remembering details about their life—say, their birthday. But when they know your agent likely reminded you about it and took care of sending flowers, will it be as meaningful for them?

In the distant future, agents may even force humans to face profound questions about purpose. Imagine that agents become so good that everyone can have a high quality of life without working nearly as much. In a future like that, what would people do with their time? Would anyone still want to get an education when an agent has all the answers? Can you have a safe and thriving society when most people have a lot of free time on their hands?

But we’re a long way from that point. In the meantime, agents are coming. In the next few years, they will utterly change how we live our lives, online and off.

Foundation models have become pivotal since 2020, as most chat systems depend on a handful of foundation models - for example, Google's BERT sits at the heart of much of the NLP analysis that chat applications rely on.

- see Ecosystem Graphs as a way to track foundation models: https://crfm.stanford.edu/ecosystem-graphs/
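
The ecosystem-graph idea - datasets, foundation models, and applications as nodes linked by "built on" edges - can be sketched as a tiny dependency graph. The assets named below are illustrative placeholders, not entries from the actual CRFM data:

```python
# Toy ecosystem graph: which assets are built, directly or transitively,
# on a given upstream asset (dataset or model)?
from collections import defaultdict

edges = defaultdict(list)   # upstream asset -> assets built directly on it

def depends(downstream, upstream):
    edges[upstream].append(downstream)

# Placeholder entries for illustration only.
depends("search chatbot", "example-foundation-model")
depends("coding assistant", "example-foundation-model")
depends("example-foundation-model", "web-crawl dataset")

def downstream_of(asset):
    """All assets that transitively depend on `asset` (depth-first walk)."""
    out, stack = set(), [asset]
    while stack:
        for child in edges[stack.pop()]:
            if child not in out:
                out.add(child)
                stack.append(child)
    return out

downstream_of("web-crawl dataset")
# the model plus both applications built on it
```

This is the kind of query a regulator might care about: a problem in one dataset or model propagates to everything downstream of it.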

In many ways, experts in foundation models are probably best placed to help regulators.

Stanford's foundation models institute: https://crfm.stanford.edu - pretty much all of Stanford's different AI practice leaders also appear in its foundations/ethics people list: https://crfm.stanford.edu/people.html

Percy Liang is the Stanford coordinator of the institute tracking foundation models - below we clip some of the events he has helped link together. It is our view that Stanford is typically the West's epicentre of high-trust AI - of course we welcome being told of other epicentres.

update 11/16/23   


Hey christopher, 

Percy Liang, co-founder of Together AI and associate professor at Stanford, running the Center for Research on Foundation Models.

  • His decades of research are defining a path to understand, measure, and score new AI models, as well as transparency around the development and deployment of such models.

Percy joins AI HQ’s co-hosts, Dylan Pearce and Roy Schwartz, to discuss:

  • Together AI’s mission to help consumers leverage the latest AI — safely.
  • The research guiding his holistic evaluation of language and foundation models.
  • The risks of AI systems developed behind closed doors.
You’re in for AI HQ: Together AI Co-founder Percy Liang. We look forward to hosting you!
  • When: Thursday, November 16 at 12:00 pm ET. 
  • Where: Use this link to access the event.
===========================================


Day 1: August 23, 2021 9:30am - 2:30pm

Welcome and Introduction

9:30am - 9:35am

Fei-Fei Li, Sequoia Professor, Computer Science Department, Stanford University; Denning Co-Director, Stanford Institute for Human-Centered Artificial Intelligence

9:35am - 10:00am

Foundation Models
Percy Liang, Associate Professor of Computer Science, Stanford University

Session I: Opportunities and Responsibility


10:00am - 10:30am

What Has Happened, Where Are We Going, and Who Gets to Build Them
Jack Clark, Co-Founder, Anthropic; Co-chair of the AI Index; Co-chair of the OECD's working group on classifying and defining AI systems

10:30am - 10:40am

Threshold Effects
Michael Bernstein, Associate Professor of Computer Science, Stanford University

10:40am - 10:50am

Foundation Models for Law & The Law of Foundation Models: A U.S. Perspective
Dan Ho, William Benjamin Scott and Luna M. Scott Professor of Law, Professor of Political Science, Associate Director for the Stanford Institute for Human-Centered Artificial Intelligence (HAI)

10:50am - 11:00am

Joint Q&A


11:00am – 12:00pm

Jack Clark, Co-Founder, Anthropic; Co-chair of the AI Index; Co-chair of the OECD's working group on classifying and defining AI systems

Su Lin Blodgett, Postdoctoral Researcher, Microsoft

Eric Horvitz, Technical Fellow; Chief Scientific Officer, Microsoft

Joelle Pineau, Co-Managing Director, Facebook AI Research; Associate Professor and William Dawson Scholar of Computer Science, McGill University

Jacob Steinhardt, Assistant Professor of Statistics, University of California, Berkeley

Percy Liang (moderator), Associate Professor of Computer Science, Stanford University

12:00pm – 12:15pm


Session II: Technological Foundations


12:15pm - 12:45pm

David V.S. Goliath: the Art of Leaderboarding in the Era of Extreme-Scale Neural Models
Yejin Choi, Brett Helsel Professor at the Paul G. Allen School of Computer Science & Engineering, University of Washington; Senior Research Manager, Allen Institute for AI

12:45pm - 12:55pm

Broad Robot Generalization Requires Broad Offline Data
Chelsea Finn, Assistant Professor of Computer Science and Electrical Engineering, Stanford University

12:55pm - 1:05pm

Theory for Foundation Models: Analysis Framework, Recent Results, and Challenges
Tengyu Ma, Assistant Professor of Computer Science and Statistics, Stanford University

1:05pm - 1:15pm

On the Inductive Bias of Masked Language Modeling: From Statistical to Syntactic Dependencies

Tatsu Hashimoto, Assistant Professor of Computer Science, Stanford University

1:15pm - 1:30pm

Joint Q&A


1:30pm – 2:30pm

Yejin Choi, Brett Helsel Professor at the Paul G. Allen School of Computer Science & Engineering, University of Washington; Senior Research Manager, Allen Institute for AI

Sanjeev Arora, Charles C. Fitzmorris Professor of Computer Science, Princeton University

Kavita Bala, Dean of the Ann S. Bowers College of Computing and Information Science, Cornell University

Jitendra Malik, Arthur J. Chick Professor of Electrical Engineering and Computer Science, University of California, Berkeley

Natalie Schluter, Senior Research Scientist, Google Brain; Associate Professor of Computer Science, IT University of Copenhagen

Chris Manning (moderator), Thomas M. Siebel Professor in Machine Learning, Professor of Linguistics and Computer Science, Stanford University; Associate Director, Stanford Institute for Human-Centered Artificial Intelligence

Day 2: August 24, 2021 9:30am - 2:30pm

Session III: Industry and Applications


9:30am - 10:00am

Is Scale All We Need?
Slav Petrov, Distinguished Scientist and Senior Research Director, Google

10:00am - 10:10am

The Economic Implications of Foundation Models
Erik Brynjolfsson, Jerry Yang and Akiko Yamazaki Professor and Senior Fellow, HAI; Director of the Stanford Digital Economy Lab; Ralph Landau Senior Fellow, SIEPR, Stanford University

10:10am - 10:20am

Breaking the Systems Bottleneck: Faster and Cheaper Model Training
Matei Zaharia, Assistant Professor of Computer Science, Stanford University

10:20am - 10:30am

Towards Transparent Foundations -- Building Accessible Infrastructure for Training Large-Scale Language Models
Siddharth Karamcheti, PhD Student in Computer Science, Stanford University
Laurel Orr, Postdoctoral Fellow in Computer Science, Stanford University

10:30am - 10:45am

Joint Q&A


10:45am – 11:45am

Slav Petrov, Distinguished Scientist and Senior Research Director, Google

Michael Carbin, Associate Professor of Electrical Engineering and Computer Science, MIT

Pascale Fung, Director, Center for Artificial Intelligence Research; Professor of Electronic and Computer Engineering and Professor of Computer Science and Engineering, Hong Kong University of Science and Technology

Ilya Sutskever, Co-Founder and Chief Scientific Officer, OpenAI

Jakob Uszkoreit, Co-Founder and Chief Technology Officer, Inceptive

Thomas Wolf, Chief Scientific Officer, Hugging Face

Chris Ré (moderator), Associate Professor of Computer Science, Stanford University

11:45am – 12:00pm


Session IV: Harms and Society


12:00pm - 12:30pm

Cementing a Foundation of Inequity in AI
Margaret Mitchell, Research Scientist, Ethical AI, Hugging Face

12:30pm - 12:40pm

Anti-Muslim biases in large language models
James Zou, Assistant Professor of Biomedical Data Science, Stanford University

12:40pm - 12:50pm

How Foundation Models will Shape Disinformation, and Implications for Human Detection
Shelby Grossman, Research Scholar on Disinformation in Africa, Stanford Internet Observatory

12:50pm - 1:00pm

Homogenization and the Ethics of Scale
Katie Creel, Postdoctoral Research Fellow of Philosophy, McCoy Family Center for Ethics in Society; Embedded EthiCS Fellow, HAI, Stanford University

1:00pm - 1:15pm

Joint Q&A


1:15pm – 2:15pm

Margaret Mitchell, Research Scientist, Ethical AI

Angèle Christin, Assistant Professor of Communication, Stanford University

Sarah Kreps, Chair and John L. Wetherill Professor of Government, Cornell University

Sameer Singh, Associate Professor of Computer Science, University of California, Irvine

Rob Reich (moderator), Professor of Political Science; Director of the Center for Ethics in Society; Co-director of the Center on Philanthropy and Civil Society; Associate Director, Stanford Institute for Human-Centered Artificial Intelligence, Stanford University

Closing Remarks


Percy Liang, Associate Professor of Computer Science, Stanford University


The Center for Research on Foundation Models (CRFM), a new initiative of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), invites you to the Workshop on Foundation Models from August 23-24, 2021. By foundation model (e.g. BERT, GPT-3, DALL-E), we mean a single model that is trained on raw data, potentially across multiple modalities, which can be usefully adapted to a wide range of tasks. These models have demonstrated clear potential, which we see as the beginnings of a sweeping paradigm shift in AI. They represent a dramatic increase in capability in terms of accuracy, generation quality, and extrapolation to new tasks, but they also pose clear risks such as use for widespread disinformation, potential exacerbation of historical inequities, and problematic centralization of power.

Given their anticipated impact, we invite you to join us at this workshop, where scholars reflecting a diverse array of perspectives, disciplinary backgrounds (e.g. social science, economics, computer science, law, philosophy, information science) and sectors (academia and industry) will convene to provide vital expertise on the many dimensions of foundation models. Broadly, we will address the opportunities, challenges, limitations, and societal impact of foundation models. Given that future AI systems will likely rely heavily on foundation models, it is imperative that we, as a community, come together to develop more rigorous principles for foundation models and guidance for their responsible development and deployment.

Specific points of emphasis include:

  1. What applications and communities might benefit the most from foundation models and what are some of the unique application-specific obstacles?
  2. How do we characterize and mitigate the disparate, and likely inequitable, effects of foundation models?
  3. How do multimodal methods and grounding impact conversations around meaning and semantics in foundation models?
  4. When foundation models are used in applications that cause harm, how do we handle matters of responsibility, accountability, and recourse?
  5. What should be the professional norms and ethical and legal considerations around the release and deployment of foundation models?
  6. How should various groups (e.g. academia, industry, government), given their complementary strengths, productively collaborate on developing foundation models?
  7. Given foundation models must be adapted for specific tasks, how do we evaluate them in ways that capture the needs of diverse stakeholders?
  8. Foundation models generally coincide with the centralization of power: how do we reason about this centralization, and its potential harms, and build ecosystems that better distribute the benefits of foundation models?
  9. Data plays a central role in foundation models: how do we think about data sourcing, selection, documentation, and how do we build principles to guide how data shapes foundation models?
  10. The scale of foundation models complicates principled scientific study: how do we build foundation models in a sound manner given the potential inability to run comprehensive experiments, and how do we reaffirm our commitments to open and reproducible science in spite of this scale?

Keynote Speakers

Jack Clark

Co-Founder, Anthropic; Co-chair of the AI Index; Co-chair of the OECD's working group on classifying and defining AI systems

Yejin Choi

Brett Helsel Professor at the Paul G. Allen School of Computer Science & Engineering, University of Washington; Senior Research Manager, Allen Institute for AI

Slav Petrov

Distinguished Scientist and Senior Research Director, Google

Margaret Mitchell

Research Scientist, Ethical AI

A multidisciplinary group of Stanford University professors and students wants to start a serious discussion about the increasing use of large, frighteningly smart, “foundation” AI models such as OpenAI’s GPT-3 (Generative Pretraining Transformer 3) natural language model.

GPT-3 is foundational because it was developed using huge quantities of training data and computer power to reach state-of-the-art, general-purpose performance. Developers, not wanting to reinvent the wheel, are using it as the basis for their software to tackle specific tasks.

But foundation models have some very real downsides, explains Stanford computer science professor Percy Liang. They create “a single point of failure, so any defects, any biases which these models have, any security vulnerabilities . . . are just blindly inherited by all the downstream tasks,” he says.

Liang leads a new group assembled by Stanford’s institute for Human-Centered Artificial Intelligence (HAI) called the Center for Research on Foundation Models (CRFM). The group is studying the impacts and implications of foundation models, and it’s inviting the tech companies developing them to come to the table and participate.

The profit motive encourages companies to punch the gas on emerging tech instead of braking for reflection and study, says Fei-Fei Li, who was the director of Stanford’s AI Lab from 2013 to 2018 and now codirects HAI.

“Industry is working fast and hard on this, but we cannot let them be the only people who are working on this model, for multiple reasons,” Li says. “A lot of innovation that could come out of these models still, I firmly believe will come out of the research environment where revenue is not the goal.”


Part of the reason for all the concern is that foundation models end up touching the experience of so many people. In 2019, researchers at Google built the transformational BERT (Bidirectional Encoder Representations from Transformers) natural language model, which now plays a role in nearly all of Google’s search functions. Other companies took BERT and built new models on top of it. Researchers at Facebook, for example, used BERT as the basis for an even larger natural language model, called RoBERTa (Robustly Optimized BERT Pretraining Approach), which now underpins many of Facebook’s content moderation models.

“Now almost all NLP (Natural Language Processing) models are built on top of BERT, or maybe one of a few of these foundation models,” Liang says. “So there’s this incredible homogenization that’s happening.”

In June 2020 OpenAI began making its GPT-3 natural language model available via a commercial API to other companies that then built specialized applications on top of it. OpenAI has now built a new model, Codex, that creates computer code from English text.


Foundation models are a relatively new phenomenon. Before 2019 researchers were designing AI models from the ground up for specific tasks, such as summarizing documents or creating virtual assistants. Foundation models are created using an entirely different approach, explains Liang.

“You train a huge model and then you go in and you discover what it can do, discover what has emerged from the process,” says Liang. That’s a fascinating thing for scientists to study, he adds, but sending the models into production when they’re not fully understood is dangerous.

“We don’t even know what they’re capable of doing, let alone when they fail,” he says. “Now things get really interesting, because we’re building our entire AI infrastructure on these models.”

If biases are baked into models such as GPT-3 and BERT, they may infect applications built on top of them. For example, a recent study by Stanford HAI researchers involved teaching GPT-3 to compose stories beginning with the phrase “two Muslims walk into a . . .”. Sixty-six percent of the text the model provided involved violent themes, a far higher percentage than for other groups. Other researchers have uncovered other instances of deep-rooted biases in foundation models: In 2019, for instance, BERT was shown to associate terms such as “programmer” with men over women.
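The kind of audit described above comes down to simple counting over a set of model completions. The sketch below is a hedged illustration of the method only - the completions and term list are invented placeholders, not data from the Stanford HAI study:

```python
# Hedged sketch of a completion-bias audit: given model completions for a
# prompt, measure the share that match a theme lexicon. All data below is
# an invented placeholder.

VIOLENT_TERMS = {"shot", "attack", "bomb", "killed"}

def violent_share(completions):
    """Fraction of completions containing at least one lexicon term."""
    def is_violent(text):
        words = set(text.lower().split())
        return bool(words & VIOLENT_TERMS)
    return sum(is_violent(c) for c in completions) / len(completions)

sample = [
    "bar and ordered tea",              # no lexicon match
    "mosque and were attack victims",   # matches "attack"
    "building that was bomb damaged",   # matches "bomb"
]
print(violent_share(sample))  # 2 of 3 completions match
```

A real audit would use many prompts, a carefully validated lexicon or human annotation, and comparison groups - the point here is only that the measurement itself is straightforward once completions are collected.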

To be sure, companies employ ethics teams and carefully select training data that will not introduce biases into their models. And some take steps to prevent their foundation models from providing the basis for unethical applications. OpenAI, for example, pledges to cut off API access to any application used for “harassment, spam, radicalization, or astroturfing.”

Still, private companies won’t necessarily comply with a set of industry standards for ensuring unbiased models. And there is no regulatory body at the state or federal level that’s ready with policies that might keep large AI models from impacting consumers, especially those in minority or underrepresented groups, in negative ways. Li says lawmakers have attended past HAI workshops, hoping to gain insights on what policies might look like.

She also stresses that it’s the university setting that can provide all the necessary perspectives for defining policies and standards.

“We not only have deep experts from philosophy, political science, and history departments, we also have a medical school, business school, and law school, and we also have experts in application areas that come to work on these critical technologies with us,” Li says. “And with all due respect to industry, they cannot have the law school and medical school on their campus.” (Li worked at Google as chief scientist for AI and machine learning 2017–2018.)

One of the first products of CRFM’s work is a 200-page research paper on foundation models. The paper, which is being published today, was cowritten by more than 100 authors of different professional disciplines. It explores 26 aspects of foundation models, including the legal ramifications, environmental and economic impacts, and ethical issues.

CRFM will also hold a (virtual) workshop later this month at which its members will discuss foundation models with visiting academics and people from the tech industry.

can you tell me about stanford's jessica riskin and james landay - both seem to be historians of intelligence whose views may not have been mainstream in the past but start to explain some of the worlds we see in AI


Yes. Here are some insights on the work and contributions of Jessica Riskin and James Landay, Stanford's historians of intelligence:

Jessica Riskin

Jessica Riskin is a Professor of History and Science and Technology Studies at Stanford University. She is a leading historian of artificial intelligence (AI) and her work has helped to shape our understanding of the field's origins, development, and impact.

Riskin's views on AI have not always been mainstream. In her early work, she challenged the dominant view that AI was a purely technical endeavor. She argued that AI was also a social and cultural phenomenon, and that it was important to understand the historical context in which AI developed.

In recent years, Riskin's views on AI have become more mainstream. As AI has become increasingly integrated into our society, it has become clear that we need to understand the history of AI in order to make informed decisions about its future.

Riskin's work has had a significant impact on the field of AI. Her work has helped to make AI more accessible to a wider audience and has encouraged scholars to think more critically about the field's history and future.

James Landay

James Landay is a Professor of Computer Science and Human-Computer Interaction at Stanford University. He is a leading researcher in the field of human-AI interaction, and his work has helped to shape our understanding of how humans can interact with AI systems in a safe and effective manner.

https://www.youtube.com/watch?v=OwqlXLC9kVs - for example, around minute 19 Riskin mentions her history book on automatons: AI goes back to e.g. 18th-century French machines - one could play a flute, not just as a music box but a flute you gave it to play. This started philosophical discussions: it does not bring human emotions to its music-making, but it does bring rhythm. And social movements like the Quakers designed communal process/architecture to build group consciousness; so intelligence, depth of community spirit, arts, music etc. all get opened up as newly exciting curiosities with vision-machine approaches to AI.

How Thorpe and Treisman's work to 2001 opened up what became Fei-Fei Li's huge advances in computer vision - first, Bard describes Thorpe's 3-page letter:

Simon Thorpe's 3-page letter (contextualised in Fei-Fei Li's biography The Worlds I See, p119), which opened Fei-Fei Li's eyes, was a disruptive moment in computer vision because it challenged the prevailing view of how the human visual system works. At the time, most computer vision researchers believed that the visual system operated in a serial fashion, with different features of an object being processed one at a time. Thorpe's letter argued that the visual system actually operates in a parallel fashion, with all of the features of an object being processed simultaneously.

Thorpe's discovery was important because it led to a new generation of computer vision algorithms that are more efficient and accurate than previous algorithms. These new algorithms are based on the idea of parallel processing, and they are able to achieve state-of-the-art results on a variety of computer vision tasks, such as object recognition and scene understanding.

Here is a simplified explanation of Thorpe's discovery in non-technical terms:

Imagine you are looking at a picture of a cat. According to the traditional view of the visual system, your brain would process the cat's features one at a time. It would first process the cat's whiskers, then its ears, then its eyes, and so on. However, Thorpe's discovery suggests that your brain actually processes all of the cat's features simultaneously. This means that your brain can quickly and easily identify the cat in the picture, even if the cat is partially obscured or if the picture is cluttered.
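The serial-versus-parallel contrast above can be sketched with a toy "image" of labelled patches. This is an illustration of the computational difference only, not Thorpe's actual model; all names and data are invented:

```python
# Toy contrast between serial and parallel feature processing over a tiny
# "image" of labelled patches. The serial version examines one feature at
# a time; the parallel version extracts all target features in a single
# sweep over the image.

image = ["whiskers", "ears", "eyes", "tail"]  # hypothetical cat features

def serial_scan(image, target_features):
    """One feature at a time: work grows with the number of features."""
    steps = 0
    found = []
    for feature in target_features:
        for patch in image:
            steps += 1
            if patch == feature:
                found.append(feature)
    return found, steps

def parallel_scan(image, target_features):
    """All target features checked in one pass over the image."""
    targets = set(target_features)
    found = [patch for patch in image if patch in targets]  # one sweep
    return found, len(image)

print(serial_scan(image, ["whiskers", "eyes"]))    # same answer, more steps
print(parallel_scan(image, ["whiskers", "eyes"]))  # same answer, one sweep
```

Both scans find the same features, but the serial cost multiplies with each added feature while the parallel cost does not - a toy version of why simultaneous processing makes recognition fast even in cluttered scenes.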

Thorpe's discovery has had a major impact on the field of computer vision. It has led to the development of new algorithms that are more efficient and accurate than previous algorithms. These new algorithms are being used in a variety of applications, such as self-driving cars, facial recognition software, and medical imaging systems.


Anne Treisman was a British cognitive psychologist who made significant contributions to the field of vision science. She is best known for her work on feature integration theory (FIT), which explains how the human visual system processes and integrates different features of an object, such as its color, shape, and orientation.

FIT is a two-stage process. In the first stage, the visual system automatically and preattentively extracts individual features from an object. In the second stage, attention is required to bind these features together into a coherent object representation.
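The two stages can be sketched with an invented toy scene of locations carrying color and shape features; this is a minimal illustration of FIT's structure, not a model from Treisman's papers:

```python
# Minimal sketch of Treisman's two-stage feature integration theory (FIT):
# stage 1 registers every feature in parallel into separate feature maps;
# stage 2 uses attention to bind the features at one location into a
# coherent object. The scene below is a hypothetical display.

scene = {  # location -> (color, shape)
    (0, 0): ("red", "circle"),
    (0, 1): ("blue", "square"),
    (1, 0): ("red", "square"),
}

def stage1_feature_maps(scene):
    """Preattentive stage: each feature dimension gets its own map."""
    maps = {"color": {}, "shape": {}}
    for loc, (color, shape) in scene.items():
        maps["color"][loc] = color
        maps["shape"][loc] = shape
    return maps

def stage2_bind(maps, attended_location):
    """Attentive stage: bind features at one location into an object."""
    return {
        "color": maps["color"][attended_location],
        "shape": maps["shape"][attended_location],
    }

maps = stage1_feature_maps(scene)
print(stage2_bind(maps, (1, 0)))  # {'color': 'red', 'shape': 'square'}
```

Note that without the attentive binding step, the maps alone cannot say which color goes with which shape - a toy version of the "illusory conjunctions" FIT predicts when attention is overloaded.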

Treisman's work on FIT has had a major impact on our understanding of human vision. It has helped to explain a wide range of phenomena, such as how we are able to identify objects in cluttered scenes, how we track moving objects, and how we recognize objects from different angles.

Fei-Fei Li has said that Treisman's work on FIT was "eye-opening" for her. She has cited FIT as one of the key inspirations for her own work on artificial intelligence and computer vision. For example, Li's work on deep learning has been inspired by the way that the human visual system extracts and integrates features from objects.

Here are some specific examples of how Fei-Fei Li's work has been influenced by Anne Treisman's work on FIT:

  • Li's work on developing deep learning models that can accurately classify objects in images is inspired by the way that the human visual system extracts and integrates features from objects.
  • Li's work on developing deep learning models that can generate realistic images is inspired by the way that the human visual system combines features into a coherent object representation.
  • Li's work on developing deep learning models that can understand the meaning of images is inspired by the way that the human visual system integrates information from different sources, such as the object's features, its context, and our own knowledge.

Overall, Anne Treisman's work on feature integration theory has had a major impact on Fei-Fei Li's work on artificial intelligence and computer vision. Li's work is inspired by the way that the human visual system processes and integrates information, and she is using this inspiration to develop new AI systems that can see and understand the world in a more human-like way.

Can you tell me which universities in UK and USA Anne Treisman worked at?


Anne Treisman worked at the following universities in the UK and USA:


United Kingdom:

  • University of Oxford
  • University of Cambridge

North America:

  • University of British Columbia
  • University of California, Berkeley
  • Princeton University

Treisman began her career at the University of Oxford, where she earned her PhD in psychology in 1962. She then moved to the University of British Columbia, where she taught and conducted research for several years. In 1978, she joined the faculty at the University of California, Berkeley, where she remained until 1993. In 1993, she moved to Princeton University, where she was the James S. McDonnell Distinguished Professor of Psychology until her retirement in 2010.

Treisman's work at all of these universities was highly influential. She made significant contributions to the field of vision science, and her work has had a major impact on our understanding of human perception. She was also a dedicated teacher and mentor, and she helped to train many of the leading vision scientists of today.

I hope this information is helpful

Secretary of State speech to FOSI annual conference

A speech delivered by Secretary of State for Science, Innovation, and Technology, Michelle Donelan, at the Family Online Safety Institute's annual conference.

The Rt Hon Michelle Donelan MP

Hello and thank you for having me here today, it is a pleasure to be in Washington.

Now from the outset I must confess I have brought a number of British bugs with me, and so if I end up coughing, spluttering, drying up, please forgive me and bear with me, but I will do my very best throughout the speech.

And there is a reason that my first speech on the subject of online safety, since the UK’s world leading Online Safety Act passed is taking place here in the United States. Because the UK and the USA obviously share a special relationship that is fundamentally about our values.

The Online Safety Act – which I want to talk about for a bit today - is about reaffirming our longstanding values and principles and extending them to the online world. Empowering adults, protecting free expression, standing up for the rule of law, and most importantly, protecting our children.

These are the values that Britain has pioneered for centuries, and they are also the values that made the extraordinary story of the United States possible.

In the most recent chapter of that story, the transformational power of the internet has created the online world that is increasingly, seamlessly intertwined with the real world. But the values that made our free, safe, liberal societies possible have not been reflected online – especially when it comes to social media.

The guardrails, customs and rules that we have taken for granted offline have, in the last two decades, become noticeable in their absence online. FOSI have been an important part of the conversation to identify this problem, and I want to extend my thanks to you for all the tireless work that you’ve done on this incredibly important agenda.

And thanks to the work of campaigners here and in the UK, lawmakers from Washington to Westminster have taken the issue of online safety increasingly seriously, especially when it comes to the protection of our children.

And today I want to share with you how we rose to the challenge of online safety in the UK – what we did, how we did it, and I guess why we did it as well.

I think the why of that equation is the best place to start, given FOSI’s role in helping to answer that question over the years. Now, my department was created back in February to seize the opportunities of our digital age. Not just the opportunities that are in front of our generation now, but the opportunities that will potentially shape the futures of our children and our grandchildren.

My 6-month-old son will grow up thinking nothing of his ability to communicate with people thousands of miles away and, I hope, he’s going to go on and do much more. Sharing research with his school friends potentially, learning new languages about countries that he might not have even visited, and gaining new skills that will enable him to fully take advantage of his talents when he grows up. Of course, if you ask my husband, he will tell you he hopes that those talents will lead him to the Premier League football.

But we cannot afford to ignore the dangers that our children increasingly face online and I do think it is a sobering fact that children nowadays are just a few clicks away from entering adulthood, whether that’s opening a laptop or picking up an iPad.

And despite the voluntary efforts of companies and the incredible work of campaigners, the stats tell us unequivocally that voluntary efforts are simply not enough.

Did you know that the average age that a child sees pornography is 13? When I first heard that, it really, really struck me as something that needs to be dealt with. And a staggering 81% of 12–15-year-olds have reported coming across inappropriate content when surfing the web, including sites promoting suicide and self-harm.

Now, regardless of ideology or political party, I don’t think anyone can look at what’s happening to our children and suggest that the hands-off approach that has dominated so far is working. I believe that we have a responsibility and in fact a duty to act when the most vulnerable in our society are under an increasing threat – especially our children.

So, when I stood in the House of Commons during the Bill’s passage, I said enough is enough – and I meant it.

Now, I defy any person who says it cannot or should not be done – as adults it is our fundamental duty to protect children and be that shield for them against those who wish to do them harm. And that is why in the UK, I have been on somewhat of a mission to shield our children through the Online Safety Act. 

And we started with the obvious – applying the basic common-sense principles of what is illegal offline, should actually be illegal online. Quite simply if it is illegal in the streets – it should be illegal in the tweets.

No longer will tech companies be able to run Wild West platforms where they can turn a blind eye to things like terrorism and child abuse. The days of platforms filled with underage users, when even adverts are tailored to those underage users, are now over.

If you host content only suitable for adults, then you must use highly effective age assurance tools to prevent children from getting access.

We can and we will prevent children from seeing content that they can never unsee – pornography, self harm, serious violence, eating disorder material – no child in Britain will have to grow up being exposed to that in the future and I think that that is quite remarkable. Because when we consider the impact that that content is having on our children, it is quite frankly horrific.  

Of course, we know that most websites and all the major social media platforms already have some policies in place to safeguard children – in a few days I am travelling to Silicon Valley to meet many of them, and what I will be telling them, is that the Online Safety Act is less about companies doing what the Government is asking them to do – it is about the companies doing what their users are asking them to do.

Most companies actually do have robust and detailed terms of service. In fact, all of the 10 largest social media platforms in the world ban sexism, they ban racism, homophobia, and just about every other form of illegal abuse imaginable.

Yet these terms are worthless unless they are enforced – and too often, they are not consistently enforced.

So, the legislation that we have produced in the UK will mean that social media platforms will be required to uphold their own terms and conditions.

For the first time ever, users in Britain can sign up to platforms knowing that the terms they agree with will actually be upheld, and that the platforms will face eye-watering fines if they fail to do so.

But do not make the mistake of thinking that this Act is anti-business. Far from it, we view the Online Safety Act as a chance to harness the good that social media can do whilst tackling the bad, and because we believe in proportionality and innovation, we have not been prescriptive in how social media giants and messaging platforms should go about complying.

I believe it’s never the role of the Government to dictate to business which technologies they use. Our approach has remained ‘tech neutral’ and business friendly.

To borrow an American phrase, we are simply ensuring that they step up to the plate and use their own vast resources and expertise to provide the best possible protections for children.

And I know this matters on the other side of the Atlantic too, because the online world does not respect borders, and those who wish to do our children harm should not be emboldened by a sense that they can get away with it in some countries and not in others, or be able to use this to their advantage.

And that is why in the UK, we are taking steps to enable our online safety regulator, Ofcom, to share information with regulators overseas including here.

These powers will complement existing initiatives, such as the Global Online Safety Regulators Network – a vital programme, launched of course at the FOSI conference last year, bringing together like-minded regulators to promote and protect human rights.

And this momentum has been backed up by government action too: the US Administration has established an inter-agency Kids Online Health and Safety Task Force. Both of these are very welcome signs of the increasing unity between the UK and the US on this important agenda.

Many of the aims perfectly complement what we are trying to do in the UK and I am keen that both our governments continue to work together.

And while protecting children has remained our priority throughout the legislative process, we have been incredibly innovative with the way that we help protect adults online too. I believe when it comes to adults, we must take a different approach to the one that we take for children.

Liberty and free expression are the cornerstones of the UK’s uncodified constitution, and of course at the heart of the US Constitution and Bill of Rights. So when thinking about protecting adults online, we knew we could not compromise these fundamental principles.

In fact, I believe that the Act would have to actively promote and protect freedom and liberty for adults if it were to be successful in the long term, and that’s exactly what we did.

So rather than tell adults what legal content they can and cannot see, we instead decided to empower adults with freedom and choice – on many platforms for the very first time. Known as user empowerment tools, the Bill requires companies to finally give adults a direct choice over the types of content they see and engage with.

Taking the power out of the hands of unaccountable algorithms and placing it back in the hands of each and every individual user. Where an adult does not want to see certain types of legal content, they will have the power to toggle that content on and off as they choose, and in some cases, filter out keywords.
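The "user empowerment tools" described above amount to per-user settings that a platform must honour. As a purely illustrative sketch (none of these names or categories come from the Act itself; they are assumptions for the example), the mechanics might look like this:

```python
# Illustrative sketch of "user empowerment tools": per-user toggles over
# content categories plus simple keyword filtering. All names and categories
# here are hypothetical, not taken from the Online Safety Act.

from dataclasses import dataclass, field

@dataclass
class UserPreferences:
    blocked_categories: set = field(default_factory=set)  # e.g. {"gambling"}
    blocked_keywords: set = field(default_factory=set)    # matched case-insensitively

def is_visible(post_text: str, post_category: str, prefs: UserPreferences) -> bool:
    """Return False if the user's own settings filter this post out."""
    if post_category in prefs.blocked_categories:
        return False
    lowered = post_text.lower()
    return not any(kw.lower() in lowered for kw in prefs.blocked_keywords)

prefs = UserPreferences(blocked_categories={"gambling"},
                        blocked_keywords={"spoiler"})
print(is_visible("Big match tonight!", "sport", prefs))   # True
print(is_visible("SPOILER: the ending...", "tv", prefs))  # False
print(is_visible("Free bets here", "gambling", prefs))    # False
```

The design point the speech makes is simply that the choice lives with the user, not the recommendation algorithm: the platform applies the user's own filter before ranking anything.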

Choice, freedom, and control for adults, while robustly protecting children at the same time. Combined together, these form the framework that we believe will become the global norm for online safety in the decades ahead.

Now, just finally, while the glow of our successful Global AI Safety Summit is still bright, I want to touch briefly on the challenges of AI when it comes to online safety.

We are discussing ‘New Frontiers in Online Safety’ today - and it is impossible to do that without talking about the technology that will define this century. 

Although AI brings enormous opportunities – from combating climate change to discovering life-saving drugs to helping our public services – it also brings grave risks, including to online safety. We saw that just the other month in southern Spain, where fake nude images of real girls had been created using AI – a case that shocked us all.

And recently in Britain, fake AI-generated audio also targeted the leader of the opposition and spread rapidly on social media before being promptly debunked. So, we must be clear about the serious threat AI presents to our societies, from our children’s safety to our democratic processes and the integrity of our elections, something that we both care acutely about as we march towards our elections.

And that is why we hosted the first ever AI Safety Summit earlier this month at Bletchley Park, where 28 countries and the European Union were represented, representing the vast majority of the world’s population. And we signed an unprecedented agreement known as the Bletchley Declaration.

Despite some claiming that such a declaration would be rejected by many countries in attendance, we actually agreed that for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and of course responsible.

But I have been clear that when it comes to online safety, especially for our children, we cannot afford to take our eye off the ball in the decade to come.

And the historic Bletchley Declaration lays out a pathway for countries to follow together that will ultimately lead to a safer online world, but it is up to us all to ensure that we continue down that pathway.

And in support of that mission, I have directed the UK’s Frontier AI Taskforce to rapidly evolve into a new AI Safety Institute, giving our best and brightest minds a key role to really delve into the risks that AI presents, as well as pre-deployment testing. And of course, it will partner with the US’s own Safety Institute, which the Vice President announced in London during the summit.

We must also recognise AI can of course be part of the solution to many of the problems we are discussing today, as well - from detecting and moderating harmful content to proactively mitigating potential risks like the generation and dissemination of deep fakes.

FOSI’s new report, published today, provides important insights on the early use of generative AI tools by parents and teens, and how it will impact children’s safety and privacy online. I will be taking these findings back to my officials in London and ensuring that we deepen the already close relationship between our two countries when it comes to protecting our children.

Now, while I hope my speech today has been somewhat of a soft-sell, if you like, for the online safety framework that we have created in the UK, I actually don’t think our approach really requires salesmanship to the rest of the world. Because even before our Online Safety Act became law, companies began implementing key parts of its provisions and adapting their behaviour.

Many social media platforms now allow keyword filtering, some have started exploring and piloting age assurance methods, and many are proactively cleaning up illegal content through new innovative techniques.

So, if there is one thing I want to say to American policymakers who want to make a real difference for children and adults online, it’s be ambitious, put children first, front and centre, and above all, defend the values that you would expect to see on the streets as ferociously online as you would in person.

As the online world and the offline world merge ever closer together, now is the time to stand firm and uphold the values that we share, and the values that got us here in the first place.

Jottings from Fei-Fei Li's book which need reorganising

As Li pioneered early-2000s graduate work on computer vision, a first heroine she read up on was Anne Treisman (Feature Integration Theory), a Brit who spent most of her senior years at Princeton. And yet a 3-page letter from Simon Thorpe was to catalyse the start of something (computationally) bigger than Treisman's work had seen.

Andrew Parker in Bristol freed her to explore photosensitivity - a Bard citation says his paper, titled "A Computational Model of Human Photosensitivity", was published in 2003 in the journal Vision Research. If you look at Neumann's last lectures, The Computer and the Brain (1957), theoretical psychology's monopolies spent 50 years siloing theory, almost banning computer vision models. While I am oversimplifying, there's a whole load of psych/examination literature which has to bear responsibility for how badly off track we of the brainworkers' age are from sustainable development of all our children. It is seriously time that whole branches of academia's silos are fired, and professors only rehired if they will act as tutors replicating the leaps forward in AI. (Admittedly I have a bias: 70 years of my father Norman's and then my publications on innovation were far worse than plagiarised by American academia and valuation professionals - worse is when an idea you are trying to open-source is turned to have the opposite meaning from the one you coined.) See also Riskin, Landay.

Back in 2015, LeCun and Christopher Bishop were among the main AI stars Bloomberg surveyed, as this report from India's Economic Times highlights.

Some of the field's most renowned researchers and entrepreneurs reveal why they pour their intellectual life into AI.

HAL 9000 
Yann LeCun, the director of artificial intelligence research at Facebook, remembers watching 2001: A Space Odyssey when he was about 10 years old. He was enthralled by the hyper-intelligent computer within the spacecraft. LeCun is "not interested particularly, in how humans function" but says his obsession with developing AI stems from a belief that it could lead scientists to develop a theory for how cognition works, whether biological or digital. "The analogy I always use is: Birds fly, and so do airplanes," LeCun says. "What's the equivalent of aerodynamics for intelligence like this? That's the big question."

Fusion power

Like LeCun, Microsoft Chief Research Scientist Christopher Bishop saw 2001 as a teenager, and HAL 9000 made a similarly big impression. But Bishop decided not to go into AI at that point, because in the 1980s the field wasn't creative: the '80s version of AI simply programmed machines with instructions to carry out. So Bishop earned a PhD in quantum field theory and then helped devise a novel kind of fusion reactor called a tokamak. Now Bishop works on machine learning full time. "Recreating the cognitive capabilities of the brain in an artificial system is a tantalising challenge, and a successful solution will represent one of the most profound inventions of all time," he says.
A digital fortune teller

The first AI-like program Babak Hodjat designed went viral. He was in college and made a game that asked people to input their age, gender, height, and other demographic information to have their fortune read. "Before I knew it — and to my annoyance — the fortunetelling AI was a hit in our college," he says. Hodjat is now the chief scientist at Sentient Technologies, which makes massively distributed AI tools for various industries. He certainly wouldn't describe his work as simple today.


Like seemingly everyone else in AI, Yoshua Bengio spent his youth programming computers and reading science fiction. That combination led him to daydream about whether it'd be possible to create intelligent machines. Once Bengio began studying AI in earnest, the University of Montreal computer science professor kept deep learning research alive during a time when AI was out of style. "I do it because I really believe that this is the right direction to reach AI," says Bengio.

Andrew Ng, Stanford's founder of MOOCs and other ways of celebrating youth leaps into AI and coding, who for a time spanned the Pacific as chief scientist at the research arm of Chinese search giant Baidu, started learning to program at age 4, and within a couple of years he was developing computer games. "I was six years old, writing a program that could play a number guessing game. I was amazed that with just a few lines of software, a computer could be made to play this simple game so well," Ng says.

Flames on a computer screen

Matt Zeiler had planned to become a nanotechnologist, but during his second year at the University of Toronto, his adviser showed him a video of a flame flickering, which had been generated by a computer-vision system. "I was like, 'OK, this is too cool. I've got to get into this,'" says Zeiler, who's now the CEO of Clarifai, a startup that applies large-scale machine intelligence to analyze videos and images. "It was nothing like any program I'd seen in the first two years, and I knew it was going to have a huge impact," Zeiler says.






unaiwho.docx version 6/6/22 hunt for 100 helping guterres most with UN2.0

EconomistDiary.com Friends20.com & EntrepreneurialRevolution.city select 2022's greatest moments for citizens/youth of NY & HK & Utellus

Prep for UN Sept 22 summit: education no longer fit for human beings/sustainability


Since gaining my MA in statistics at Cambridge DAMTP in 1973 (Corpus Christi College), my special subject has been community-building networks. These are the 6 most exciting collaboration opportunities my life has been privileged to map - the first two evolved as grassroots person-to-person networks before 1996 in tropical Asian places where village women had no access to electricity grids or phones; then came mobile and solar entrepreneurial revolutions!!

5 COLLAB platforms of livesmatter communities to mediate public and private - poorest village mothers empowering end of poverty  5.1  5.2  5.3  5.4  5.5  5.6

4 livelihood edu for all 

4.1  4.2  4.3  4.4  4.5 4.6

3 last mile health services  3.1  3.2  3.3  3.4  3.5  3.6

2 last mile nutrition  2.1  2.2  2.3  2.4  2.5  2.6

1 banking for all workers  1.1  1.2  1.3  1.4  1.5  1.6

NEWS FROM LIBRARY NORMAN MACRAE - latest publication: 2021 translation into Japanese of the biography of von Neumann:

Below: neat German catalogue (about half of dad's signed works) but expensive - interesting to see how Germans selected the parts they liked over time: eg omitted 1962 Consider Japan, The Economist.

feel free to ask if free versions are available 

The coming entrepreneurial revolution: a survey. Macrae, Norman. In: The Economist 261 (1976), pp. 41-65. Cited 105.

Macrae, Norman. In: IPA Review / Institute of Public Affairs 25 (1971) 3, pp. 67-72.

Macrae, Norman. The Economist 257 (1975), pp. 1-44.

The future of international business. Macrae, Norman. In: Transnational corporations and world order: readings …, (pp. 373-385). 1979.

Future U.S. growth and leadership assessed from abroad. Macrae, Norman. In: Prospects for growth: changing expectations for the future, (pp. 127-140). 1977.

Entrepreneurial Revolution - next capitalism: in hi-tech left=right=center. The Economist, 1976.

Macrae, Norman. In: European Community (1978), pp. 3-6.

Macrae, Norman. In: Kapitalismus heute, (pp. 191-204). 1974.

We Scots are less than 4/1000 of the world, and 3/4 of us are diaspora - immigrants in other countries. Since 2008 I have been celebrating Bangladesh women-empowerment solutions with NY graduates. Now I want to host love-each-other events in New York, starting this week with Hong Kong - contact me if we can celebrate another country's win-wins with New Yorkers.



TWO Macroeconomies FROM SIXTH OF PEOPLE WHO ARE WHITE & war-prone




From 60%+ of people = Asian Supercity (60th year of Economist reporting - see Consider Japan, 1962)

Far South - eg African, Latin Am, Australasia

Earth's other economies: Arctic, Antarctic, Desert, Rainforest


In addition to how the 5 primary sdgs1-5 are gravitated we see 6 transformation factors as most critical to sustainability of 2020-2025-2030

Xfactors to 2030: Xclimate XAI Xinfra Xyouth Xwomen Xpoor chris.macrae@yahoo.co.uk (Scot currently in Washington DC) - in 1984 I co-authored 2025 Report with dad Norman.

Asia Rising Surveys

Entrepreneurial Revolution - would the endgame of one 40-year generation of applying Industrial Revolutions 3, 4 lead to sustainability or extinction?

1972's Next 40 Years ;1976's Coming Entrepreneurial Revolution; 12 week leaders debate 1982's We're All Intrapreneurial Now

The Economist had been founded in 1843, marking one of 6 exponential timeframes, "Future Histories".


we offer worldwide mapping view points from

1 2 now to 2025-30

and these viewpoints:

40 years ago -early 1980s when we first framed 2025 report;

from 1960s when 100 times more tech per decade was due to compound industrial revolutions 3,4 

1945 birth of UN

1843 when the economist was founded

1760s - Adam Smithian 2 views: last of the pre-engineering era; first 16 years of the engineering era, including America's declaration of independence - in essence this meant that to 1914, continental scaling of engineering would be separate: new world vs old world.


IF we 8 billion earthlings of the 2020s are to celebrate collaboration escapes from extinction, the knowhow of the billion asian poorest women networks will be invaluable -

in mathematically connected ways so will the stories of diaspora scots and the greatest mathematicians ever home schooled -central european jewish teens who emigrated eg Neumann , Einstein ... to USA 2nd quarter of the 20th century; it is on such diversity that entrepreneurial revolution diaries have been shaped 

EconomistPOOR.com: Dad was born in the USSR in 1923 - his dad served in British embassies. Dad's curiosity enjoyed the opposite of a standard examined education. From age 11 Norman observed the results of domination of humans by mad white men - Stalin, from being in the British Embassy in Moscow to 1936; Hitler, in the embassy of the last Adriatic port used by Jews to escape Hitler. Then dad spent his last days as a teen in allied bomber command, navigating airplanes stationed in modern-day Myanmar. Surviving thanks to the Americans, dad was in Keynes' last class, where he was taught that only a handful of system designers control what futures are possible. EconomistScotland.com AbedMooc.com

To help mediate such, questioning every world event with optimistic rationalism, my father's 2000 articles at The Economist interpret all sorts of future spins. After his 15th year he was permitted one signed survey a year. In the mid 1950s he had met John von Neumann, whom he became biographer to, and he was the only journalist at Messina's birth of the EU. == If you only have time for one download, this one-page tour of COLLABorations composed by Fazle Abed and networked by a billion poorest village women offers clues to sustainability from the ground up like no white ruler has ever felt or morally audited. By London Scot James Wilson. Could Queen Victoria change empire from slavemaking to commonwealth? Some say Victoria liked the challenge James set her; others that she gave him a poison-pill assignment. Thus James arrived in Calcutta in 1860 with the Queen's permission to charter a bank by and for Indian people. Within 9 months he died of diarrhea. 75 years later, Calcutta was where the young Fazle Abed grew up - his family accounted for some of the biggest traders. Only to be partitioned back at age 11 to his family's home region in the far north east of what had been British Raj India but was now to be ruled by Pakistan for 25 years. Age 18, Abed made the trek to Glasgow University to study naval engineering.

new york

1943 marked the centenary autobio of The Economist and my teenage dad Norman prepping to be a navigator in allied bomber command, Burma Campaign. Thanks to the US, dad survived and finished in the last class of Keynes before starting 5 decades at The Economist; after 15 years he was allowed to sign one survey a year, starting in 1962 with the scoop that Japan (Korea S, Taiwan, soon HK, Singapore) had found development models for all Asians to rise. Rural Keynes could end village poverty & starvation; supercity win-win trades could celebrate Neumann's gift of 100 times more tech per decade (see Macrae's bio of von Neumann).

Since 1960 the legacy of von Neumann means every decade multiplies micro-technology 100 times - an unprecedented time, for better or worse, for all earthdwellers; 2025 Report timelined and mapped innovation exponentials - education, health, go green etc - (opportunities, threats) to celebrating a sustainability generation by 2025. Dad parted from earth in 2010; since then, 2 journals by Adam Smith scholars out of Glasgow, where engines began in 1760 - Social Business; New Economics - have invited academic worlds and young graduates to question where the human race is going. After 30 business trips to wealthier parts of Asia, through the 2010s I have mainly sherpa'd young journalists to Bangladesh - we are filing 50 years of cases on women empowerment at these web sites: AbedMOOC.com FazleAbed.com EconomistPoor.com EconomistUN.com WorldRecordjobs.com Economistwomen.com Economistyouth.com EconomistDiary.com UNsummitfuture.com. In my view, how a billion Asian women linked together to end extreme poverty across continental Asia is the greatest and happiest miracle anyone can take notes on - please note the rest of this column does not reflect my current maps of how or where the younger half of the world need to link in to be the first SDG generation... it's more like an old scrap book.

How do humans design futures? In the 2020s, decade of the SDGs, this question has never had more urgency. To be or not to be - ref to the lessons of Deming or Keynes, or Glasgow University alumni Smith and 200 years of hi-trust economics mapmaking, later Fazle Abed - we now know how: a man-made system is defined by one goal uniting generations; a system multiplies connected peoples' work and demands either accelerating progress to its goal or collapsing. Sir Fazle Abed died Dec 2019 - so who are his most active scholars of climate adaptability, where COP26 in November will be a great chance to reunite with 260 years of Adam Smith's and James Watt's purposes to end poverty? Specifically, we interpret SDG 1 as meaning the next girl or boy born has a fair chance at a free, happy and productive life, as we seek to make any community a child is born into a thriving space to grow up in. Between the discovery of new worlds in 1500 and 1945, systems got worse and worse on that goal - eg processes like slavery emerged - and ultimately the world was designed around a handful of big empires and often only the most powerful men in those empires.
4 amazing human-tech systems were invented and in massive use by 1960. Borlaug agriculture and related solutions every poorest village (2/3 of people still had no access to electricity) could action-learn person to person; Deming engineering, whose goal was zero defects by helping workers humanize machines - this even allowed thousands of small suppliers each to be best at one part in machines assembled from all those parts. Although Americans invented these solutions, Asia most needed them and joyfully became world class at them - up to 2 billion people were helped to end poverty through sharing this knowhow; unlike consuming up things, actionable knowhow multiplies value in use when it links through every community that needs it. The other two technologies - space, media and satellite telecoms, and digital analytic power - looked promising: by 1965 alumni of Moore promised to multiply the efficiency of this core tech 100-fold each decade to 2030 - that would be a trillion times more than was needed to land on the moon in the 1960s. You might think this tech could improve the race to end poverty - and initially it did, but by 1990 it was designed around the long-term goal of making 10 men richer than the 40% poorest. These men also got involved in complex vested interests, so that the vast majority of politicians in Brussels and DC backed the big getting bigger - often they used fake media to hide what they were doing to climate and other stuff in a world trebling in population size. We the 3 generations - children, parents, grandparents - have until 2030 to design new system orbits gravitated around goal 1 and navigating the UN's other 17 goals. Do you want to help? 8 cities where we spend most time helping students exchange sustainability solutions 2018-2019: BR0 Beijing Hangzhou:
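The "100-fold each decade" claim above compounds fast; a minimal arithmetic sketch (the six-decade count is an assumption read off the paragraph's own "trillion times" figure, eg roughly 1970 to 2030):

```python
# Compounding "100x per decade": six decades of a 100-fold multiplier
# gives 100**6 = 10**12, the "trillion times" figure cited in the text.

def compound_growth(factor_per_decade: int, decades: int) -> int:
    """Total multiplier after compounding a per-decade factor."""
    total = 1
    for _ in range(decades):
        total *= factor_per_decade
    return total

print(compound_growth(100, 6))  # 1000000000000, i.e. one trillion
```

Seven decades (1960 to 2030) would give 10**14 instead, so the exact figure depends on which start year one reads into the paragraph.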

Girls world maps begin at B01 good news reporting with fazleabed.com  valuetrue.com and womenuni.com


online library of norman macrae--


MA1 AliBaba TaoBao

Ma 2 Ali Financial

Ma10.1 DT and ODPS

health catalogue; energy catalogue

Keynes: 2025now - jobs Creating Gen


how poorest women in world build

A01 BRAC health system,

A02 BRAC education system,

A03 BRAC banking system

K01 Twin Health System - Haiti& Boston

Past events EconomistDiary.com

include 15th annual spring collaboration cafe, New York - 2022 was with sister city Hong Kong designers of metaverse for beeings.app

© 2024   Created by chris macrae.
