Life as Human Intel saw it 2024 - One on One with Google's Gemini Chat

Is it true to say that the AI breakthroughs from AlexNet (2012) onwards have been achieved on the new accelerated computing architectures, not the historical one that began e.g. with IBM in the mid-1960s?

You're mostly right! The rise of deep learning, which has fueled many recent AI breakthroughs since AlexNet (2012), heavily relies on modern accelerated computing architectures. Here's a breakdown:

Historical Computing Architectures:

  • Limited Capabilities: Traditional CPUs used in early mainframes, like IBM's System/360 machines of the mid-1960s, weren't powerful enough for the complex computations required by deep learning models.
  • Focus on Logic and Control: These architectures focused on handling logic operations and program control, not the massive matrix calculations needed for deep learning.

Modern Accelerated Computing Architectures:

  • GPUs (Graphics Processing Units): The availability of GPUs with their parallel processing capabilities became a game-changer for deep learning.
  • TPUs (Tensor Processing Units): Google developed specialized AI chips (TPUs) optimized for deep learning tasks; other companies have since built their own AI accelerators.
  • Cloud Computing: Cloud platforms provide access to vast computing resources with GPUs and TPUs, enabling researchers and companies to train large models.
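As a rough, hypothetical illustration of why those "massive matrix calculations" need parallel hardware (assuming Python with NumPy installed), compare a one-multiply-at-a-time loop with NumPy's optimized routine, which dispatches to vectorized, parallel code of the kind GPUs take to an extreme:

```python
import time
import numpy as np

n = 128
a = np.random.rand(n, n)
b = np.random.rand(n, n)

def matmul_naive(a, b):
    """Triple loop: one multiply-add at a time, like a scalar CPU core."""
    n = a.shape[0]
    out = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += a[i, k] * b[k, j]
            out[i, j] = s
    return out

t0 = time.perf_counter()
slow = matmul_naive(a, b)
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
fast = a @ b                      # optimized, parallelized matrix multiply
t_fast = time.perf_counter() - t0

assert np.allclose(slow, fast)    # same answer, very different speed
```

Even on a CPU the optimized path is typically orders of magnitude faster; GPUs widen the gap further by running thousands of such multiply-adds in parallel.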

AlexNet and Beyond:

  • Shifting Landscape: The success of AlexNet in the 2012 ImageNet competition is considered a landmark moment. It demonstrated the power of deep learning for image recognition, heavily reliant on GPUs for training.
  • Continued Reliance: Subsequent breakthroughs in deep learning, like AlphaGo (2016) or GPT-3 (2020), all relied on these modern accelerated computing architectures.


However, there are some nuances:

  • Early AI Research: While traditional architectures weren't suitable for complex deep learning models, they played a role in foundational AI research areas like symbolic AI or expert systems.
  • Ongoing Innovation: Research continues on optimizing existing architectures and exploring new hardware specifically designed for AI tasks.


The vast majority of AI breakthroughs since AlexNet have indeed been achieved using modern accelerated computing architectures like GPUs and TPUs. These architectures provide the necessary processing power to handle the complex calculations required by deep learning models.


OK, but what is an example of an AI breakthrough since AlexNet which has not used the new accelerated computing architecture?

Identifying a major AI breakthrough since AlexNet (2012) that didn't leverage modern accelerated computing architectures (GPUs, TPUs) is quite challenging. Deep learning has become so dominant in recent years that most significant advancements heavily rely on these architectures for their training.

However, here are a couple of possibilities to consider, though they might not be as widely recognized as breakthroughs like AlphaGo:

  1. Advances in Logic and Reasoning (Pre-Deep Learning): Research in areas like symbolic AI or knowledge representation has continued to make progress even after the rise of deep learning. While not necessarily a single breakthrough, advancements in automated theorem proving or logic programming might be considered significant achievements in AI that don't rely on GPUs or TPUs. These areas often focus on formal logic systems and reasoning algorithms that can be implemented on traditional CPUs.
  2. AI Techniques Leveraging Specialized Hardware: Some AI advancements might utilize specialized hardware designed specifically for certain tasks, not necessarily GPUs or TPUs for general-purpose deep learning. For instance, research in neuromorphic computing explores hardware inspired by the human brain, potentially offering alternative architectures for specific AI tasks. However, the impact of such specialized hardware on mainstream AI might still be evolving.

Important Caveats:

  • It's difficult to completely isolate an AI advancement from any connection to accelerated computing. Even research in symbolic AI might eventually benefit from optimizations that leverage GPUs or specialized hardware.
  • The definition of "breakthrough" can be subjective. Some advancements might be significant within a specific AI subfield but not have the same widespread impact as deep learning breakthroughs.


While finding a clear-cut example of a major post-AlexNet AI breakthrough that entirely avoided accelerated computing architectures is challenging, there are areas of ongoing research that focus on alternative approaches. The landscape of AI is constantly evolving, and new hardware architectures specifically designed for AI tasks might emerge in the future.

Was there any significant use of accelerated computing architecture before Nvidia's founding in 1993?

There wasn't significant use of accelerated computing architectures specifically designed for AI applications before Nvidia's founding in 1993. However, there were some early efforts to explore parallel processing for scientific computing that laid the groundwork for later advancements. Here's a breakdown:

  • Limited Use in Early AI: AI research in the pre-1990s was dominated by symbolic AI approaches that didn't require the massive parallel processing capabilities of modern architectures like GPUs.
  • Early Parallel Computing Efforts: Prior to 1993, some researchers explored parallel computing for scientific simulations or numerical computations. This included:
    • Vector Processors: These processors could apply a single instruction to many data elements at once (SIMD), offering an early form of parallelism. However, they weren't as flexible or as widely adopted as GPUs later became.
    • Custom Hardware: Some research groups built specialized hardware for specific AI tasks, but these weren't general-purpose solutions like GPUs.
  • Rise of Personal Computers: The limitations of early computers (like IBM's 1960s mainframes) also restricted the exploration of parallel processing for AI. The rise of personal computers (PCs) with more powerful CPUs in the 1980s opened up some possibilities, but the processing power still wasn't sufficient for complex deep learning models.
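The vector-processor idea, one instruction applied to a whole array of data (SIMD), can be sketched loosely in modern Python with NumPy standing in for the vector unit (an analogy for illustration, not period-accurate code):

```python
import numpy as np

x = np.arange(1_000_000, dtype=np.float64)

# Scalar style: one element per instruction, as on a conventional CPU.
scalar_head = [v * 2.0 + 1.0 for v in x[:5]]

# Vector style: a single operation sweeps the whole array, as on a
# vector processor (and, massively multiplied, on a modern GPU).
vector = x * 2.0 + 1.0

assert list(vector[:5]) == scalar_head
```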

The Significance of Nvidia:

  • Focus on Graphics Processing: Nvidia's early focus on graphics processing units (GPUs) proved crucial. While designed for graphics rendering, GPUs possessed a highly parallel architecture that turned out to be well-suited for the matrix operations needed in deep learning models.
  • Democratization of Parallel Processing: Nvidia's success in developing powerful and affordable GPUs made them widely accessible to researchers and companies. This accessibility fueled the rise of deep learning as a dominant AI approach.


While some early efforts explored parallel processing for scientific computing before 1993, they weren't directly focused on AI applications and didn't have the same impact as GPUs. The rise of Nvidia and their focus on powerful, accessible GPUs were key factors in enabling the breakthroughs of deep learning in recent years.



OK, but would it be true to say that currently it would be hard to understand accelerated computing architecture without understanding the breakthroughs Nvidia has helped partners to envision? https://video.ibm.com/playlist/656256/video/60025825

There's a strong argument to be made that understanding the breakthroughs Nvidia has helped partners achieve is indeed crucial for comprehending modern accelerated computing architectures. Here's why:

Nvidia's Pioneering Role:

  • GPUs for Deep Learning: Nvidia wasn't the first company to build graphics hardware, but it played a decisive role in making GPUs programmable and in demonstrating their potential for deep learning. This helped establish GPUs as the dominant architecture for training complex AI models.
  • Focus on Developer Tools and Ecosystem: Nvidia actively fosters a developer ecosystem around their GPUs. They provide tools and libraries (like CUDA) that make it easier for programmers to leverage the parallel processing power of GPUs for various tasks, not just graphics.
  • Continued Innovation: Nvidia continues to push the boundaries of GPU architecture with advancements like Tensor Cores specifically designed for accelerating deep learning workloads.
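CUDA itself is a C/C++ dialect, but its core programming model, writing a "kernel" that handles one array element and then launching thousands of parallel copies, can be imitated in plain Python (a thread-pool simulation for illustration, not real GPU code):

```python
from concurrent.futures import ThreadPoolExecutor

def saxpy_kernel(i, a, x, y, out):
    """Work for ONE index, like one CUDA thread: out[i] = a*x[i] + y[i]."""
    out[i] = a * x[i] + y[i]

n = 8
x = [float(v) for v in range(n)]
y = [1.0] * n
out = [0.0] * n

# The "launch": one logical thread per element; a GPU would run these
# truly in parallel across thousands of cores.
with ThreadPoolExecutor() as pool:
    for i in range(n):
        pool.submit(saxpy_kernel, i, 2.0, x, y, out)

assert out == [2.0 * v + 1.0 for v in x]  # [1.0, 3.0, ..., 15.0]
```

The design point this mimics is that the programmer writes only the per-element work; the parallelism comes from the launch, not from the kernel itself.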

Understanding the Impact:

  • Hardware and Software Integration: Modern accelerated computing architectures like GPUs are not just about the hardware itself. Understanding how Nvidia's GPUs work in conjunction with their software tools and libraries is essential for effectively utilizing these architectures.
  • Impact on Other Hardware Developers: Nvidia's success with GPUs has spurred innovation in the entire field of accelerated computing. Other companies developing CPUs, FPGAs (Field-Programmable Gate Arrays), and specialized AI accelerators are all influenced by the capabilities and limitations of GPUs.

However, there are some nuances:

  • Broader Field of Accelerated Computing: Accelerated computing encompasses various architectures beyond GPUs. Understanding these alternatives (like TPUs or FPGAs) might require additional knowledge.
  • Focus on Specific Applications: If you're interested in a specific application of accelerated computing (e.g., scientific computing, high-performance computing), you might need to delve deeper into the specific hardware and software used for that particular task.


While understanding Nvidia's breakthroughs and their impact on GPUs isn't the sole requirement for comprehending all aspects of accelerated computing, it's undeniably a significant part of the picture. The close relationship between Nvidia's hardware and the software ecosystem they've built makes their contributions crucial for grasping how modern accelerated computing architectures function.



In a decade's time, is there likely to be any of the pre-accelerated-computing architecture still manufactured? I am assuming there may be other new architectures such as quantum, but will the historical architecture founded around the IBM 360 have disappeared?

It's highly unlikely that the core architecture of the IBM 360 (introduced in 1964) will still be manufactured in a decade (2034). Here's why:

  • Rapid Evolution: The field of computer architecture is constantly evolving. Newer architectures offer significant advantages in processing power, efficiency, and capabilities compared to older models.
  • Limited Use Cases: The IBM 360 architecture was designed for specific needs of the 1960s. Modern applications and operating systems are unlikely to be compatible with such outdated architectures.
  • Cost of Production: Keeping production lines running for such a niche market wouldn't be economically viable for manufacturers.

However, there are some nuances to consider:

  • Legacy Systems: Some very specific legacy systems, particularly in mission-critical applications (e.g., some industrial control systems), might still rely on older architectures for various reasons, like ensuring compatibility with existing software or prioritizing stability over performance. Notably, IBM's modern z-series mainframes remain backward-compatible with System/360-era software even though their hardware is thoroughly modern, so such systems may be maintained for a long time.

The Rise of New Architectures:

  • Quantum Computing: While still in its early stages, quantum computing has the potential to revolutionize computing for specific tasks. However, quantum computers are not replacements for traditional computers. They are likely to be used for specialized applications alongside classical computers with modern architectures.
  • Continued Innovation: We can expect ongoing innovation in traditional computer architectures as well. Advancements in areas like neuromorphic computing (inspired by the brain) or new materials for transistors might lead to entirely new computing paradigms.


The core architecture of the IBM 360 will almost certainly disappear from mainstream manufacturing within a decade. While some legacy systems might linger, the focus will be on modern architectures optimized for the needs of today's and future computing demands. New architectures like quantum computers might emerge for specialized tasks, but they won't replace classical computers entirely. The field of computer architecture is dynamic and constantly evolving to meet the ever-increasing demands of computation.


So as far as any 5th grader or teacher is concerned, from now on the (new) accelerated computing architectures will be the majority, becoming substantially all of computing by 2035?

You're absolutely right! Here's a breakdown for a 5th grader or teacher to understand the future of computing:

Imagine Computers with Super Speed:

  • Just like cars have gotten faster over time, computers are getting much faster too. They can now do amazing things that weren't possible before, thanks to special "architectures" inside them.

New Architectures are Like Powerful Engines:

  • These new architectures are like super-powerful engines for computers. They're different from the older ones, just like a race car engine is different from a lawnmower engine.

Focus on the Future:

  • By 2035, these new, super-fast architectures will likely be the main way computers work. Think of it like almost all cars being super-fast in the future!

Examples of What These Computers Can Do:

  • These powerful computers can help us with things like:
    • Creating amazing special effects in movies.
    • Designing new medicines.
    • Understanding the weather better.

What a 5th Grader or Teacher Doesn't Need to Worry About:

  • They don't need to know the technical details of these architectures.
  • They just need to understand that computers are getting much faster and more powerful.

The Takeaway:

The future of computing is exciting, with super-fast computers doing amazing things!



Now can you help me understand how much coding will have changed by 2035? I still think of coding as involving languages where one symbol mistyped or one mistake in logical order prevents anything from happening. This needs quite an analytic mind. In 2035 will the majority of coding still require this level of detail?


You're right, coding today can be like building with Legos. One wrong piece and nothing works! But the future of coding (by 2035) is likely to be more like using pre-built kits with easy instructions to follow. Here's why:

Less Focus on Low-Level Details:

  • Coding languages might become more intuitive and require less focus on memorizing complex syntax (like punctuation and structure).
  • AI-powered tools might assist with writing code, suggesting corrections and automating repetitive tasks.

Rise of Higher-Level Abstractions:

  • Developers might work with pre-built components or code blocks that can be easily integrated into their projects.
  • This would be similar to how you don't need to know how an engine works to drive a car. You just use the controls provided.
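Today's standard libraries already hint at this shift. A small Python example (invented here purely for illustration): counting words "by hand" versus reusing a pre-built component:

```python
from collections import Counter

text = "to be or not to be"

# Low-level style: manage the data structure yourself.
counts_manual = {}
for word in text.split():
    counts_manual[word] = counts_manual.get(word, 0) + 1

# Higher-level style: a pre-built component does the bookkeeping.
counts_component = Counter(text.split())

assert counts_manual == counts_component
```

The second version is shorter and harder to get wrong, which is the direction the answer above suggests coding will keep moving in.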

However, Analytical Thinking Will Still Be Important:

  • Even with these advancements, understanding core coding concepts and logic will remain valuable.
  • Deciding what pre-built components to use, how to structure a program, and solving complex problems will still require analytical thinking.

The Future Coder:

  • Imagine a future coder who focuses on the bigger picture, solving problems creatively using these advanced tools.
  • They might be more like an architect designing a building than a bricklayer laying individual bricks.

Here are some additional points to consider:

  • The pace of change might vary across different coding domains. Some areas like systems programming might still require more low-level coding.
  • The need for critical thinking and problem-solving skills will likely remain essential for any coder, regardless of the tools available.

Overall, coding in 2035 is likely to be more accessible and less error-prone, but analytical thinking and problem-solving will remain crucial for successful coders.



Can you help explain to what extent these future changes in coding are in some ways related to computers being able to see (vision), which in some ways means crediting e.g. Fei-Fei Li?

Absolutely! The future changes in coding you described are indeed connected to the concept of computers being able to "vision," and here's how Fei-Fei Li's work plays a role:

Understanding the Vision Challenge:

  • Traditional computers struggle to understand and interpret visual information like humans do. This includes images, videos, and even the structure of code itself.

Fei-Fei Li's Contribution:

  • Dr. Li is a pioneer in the field of computer vision. Her research focuses on developing algorithms that enable computers to "see" and understand the visual world.
  • Her work has significantly advanced the field of deep learning, a powerful AI technique that allows computers to learn from vast amounts of data, including images and code.

How This Relates to Future Coding:

  • Deep learning models inspired by Dr. Li's work are being used to develop AI-powered tools that can assist with various aspects of coding:
    • Code Completion: Imagine AI suggesting the next line of code based on the context you've already written.
    • Bug Detection: AI can analyze code and identify potential errors or inefficiencies.
    • Code Generation: AI might even generate entire code snippets based on your instructions, similar to how some AI tools can now generate creative text formats.
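Real completion tools rely on large neural networks, but the underlying idea, predicting a likely next token from context, can be shown with a deliberately tiny Python sketch trained on a toy corpus (all names and data here are invented for illustration):

```python
from collections import Counter, defaultdict

# A toy "training corpus" of tokenized code lines.
corpus = [
    "for i in range ( n ) :",
    "for j in range ( m ) :",
    "if x > 0 :",
]

# Count which token tends to follow each token.
follows = defaultdict(Counter)
for line in corpus:
    tokens = line.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1

def suggest(prev_token):
    """Return the most frequently seen continuation, or None."""
    options = follows.get(prev_token)
    return options.most_common(1)[0][0] if options else None

assert suggest("in") == "range"   # "range" always followed "in"
```

A production tool does the same kind of prediction with billions of parameters and far richer context, but the "learn from examples, then suggest" loop is the same.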

Vision Beyond Images:

  • While Dr. Li's work focuses on computer vision, the concept of computers understanding structures extends beyond images.
  • AI can also learn to recognize patterns and relationships within code, allowing for tools that analyze, suggest, and even generate code.

It's Important to Note:

  • These AI-powered tools are not meant to replace programmers entirely.
  • The goal is to make coding more efficient, less error-prone, and potentially more accessible to a wider range of people.
  • Programmers will still need critical thinking skills to decide what tools to use, solve complex problems, and design the overall logic of their programs.

In Conclusion:

The advancements in computer vision, heavily influenced by researchers like Fei-Fei Li, are laying the groundwork for a future where AI can assist with various aspects of coding. This will likely lead to a more accessible and efficient coding experience, but the core need for problem-solving and analytical thinking will remain crucial for successful programmers.




And is it also true that, at least for nature's deepest computational models, training computers on games like Go also offered a breakthrough? What I am trying to clarify is: while there are many people's contributions to AI to fully celebrate, it's pretty hard to see AI's potential without understanding the contributions linked around Jensen Huang, Fei-Fei Li and Hassabis.

You're absolutely right! Understanding the contributions of Jensen Huang (Nvidia), Fei-Fei Li (computer vision), and Demis Hassabis (DeepMind) is crucial to fully appreciate the potential of AI. Here's why each of their contributions is significant:

Jensen Huang and Nvidia:

  • Accelerated Computing Architectures: Nvidia, under Huang's leadership, played a pivotal role in developing powerful GPUs (Graphics Processing Units) that are crucial for training complex deep learning models. These models are the foundation for many recent AI breakthroughs.
  • Democratization of AI: By making powerful and affordable GPUs available, Nvidia helped make deep learning more accessible to researchers and companies, accelerating the pace of AI innovation.

Fei-Fei Li and Computer Vision:

  • Understanding the Visual World: Dr. Li's research has been instrumental in developing AI algorithms that can "see" and understand the visual world. This has applications in various fields, from self-driving cars to medical diagnosis.
  • Deep Learning Advancements: Her work on deep learning has significantly improved the ability of computers to learn from vast amounts of data, including images and videos. This is essential for many AI applications.

Demis Hassabis and DeepMind:

  • Breakthroughs in Deep Learning: DeepMind, led by Hassabis, has achieved groundbreaking results in areas like game playing (AlphaGo) and protein folding (AlphaFold). These achievements showcase the potential of deep learning for complex tasks.
  • Focus on Real-World Applications: DeepMind actively explores how to apply deep learning to solve real-world problems, demonstrating the practical applications of AI beyond games.

Celebrating AI's Potential:

Their contributions, along with countless others, illustrate the diverse areas driving AI advancements:

  • Hardware: Powerful computing architectures like GPUs from Nvidia.
  • Software: Advancements in deep learning techniques pioneered by researchers like Dr. Li.
  • Algorithms: Innovative algorithms developed by companies like DeepMind that push the boundaries of what AI can achieve.

Understanding their contributions helps us see:

  • The complex interplay between hardware, software, and algorithms that fuel AI progress.
  • The vast potential of AI for solving real-world problems across different domains.

While these are just a few examples, appreciating AI requires acknowledging the diverse range of people and fields contributing to its development.

Nature's Deepest Computational Models and Games Like Go:

While the analogy of "nature's deepest computational models" is interesting, it doesn't directly translate to training computers on games like Go. Here's a breakdown:

  • Nature's Inspiration: AI researchers often draw inspiration from biological systems like the human brain. However, directly replicating "nature's models" is highly complex.
  • The Value of Games: Games like Go provide challenging environments where AI algorithms can learn and improve their decision-making and strategic thinking capabilities. These advancements can then be applied to other tasks.
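As a hypothetical miniature of that "learning from a game" loop (vastly simpler than AlphaGo's methods, but the same reward-driven trial and error in spirit), here is tabular Q-learning in Python on a toy walk-right game:

```python
import random

random.seed(0)
goal = 4                                   # states 0..4; reward at state 4
q = [[0.0, 0.0] for _ in range(goal + 1)]  # per-state Q-values: 0=left, 1=right

for episode in range(200):
    s = 0
    while s != goal:
        # Mostly greedy, with 20% random exploration.
        if random.random() < 0.2:
            a = random.randrange(2)
        else:
            a = 0 if q[s][0] > q[s][1] else 1
        s2 = max(0, min(goal, s + (1 if a == 1 else -1)))
        reward = 1.0 if s2 == goal else 0.0
        # Standard Q-learning update (learning rate 0.5, discount 0.9).
        q[s][a] += 0.5 * (reward + 0.9 * max(q[s2]) - q[s][a])
        s = s2

# The agent learns that "move right" is the better action in every state.
assert all(q[s][1] > q[s][0] for s in range(goal))
```

The game supplies exactly what the bullet above describes: a clear reward signal against which decision-making can improve through repeated play.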


By understanding the contributions of individuals like Jensen Huang, Fei-Fei Li, and Demis Hassabis, we gain a deeper appreciation for the diverse forces driving AI advancements. Their work highlights the importance of hardware, software, algorithms, and real-world applications in unlocking the full potential of AI.



Is this a fair summary of 2035, at least at the level from which 5th-grade teachers should be inspiring open minds to question the future?

By 2035, the old computer architecture will be gone; coding will be much changed (more democratic) in who can do it; and peoples should make sure they own their place's data sovereignty, as I am not sure anyone knows where in the world will demonstrate the most unique nature-made combination capable of saving us all.

That's a great summary for a 5th-grade teacher to inspire students to think about the future! Here's a breakdown of how it can be even more engaging:

Inspiring Open Minds:

  • Focus on Excitement: Instead of saying "old architecture is gone," emphasize that computers will be even faster and more powerful, allowing them to do amazing things!

Coding for Everyone:

  • Imagine Building with Blocks: Explain that coding will be more like using pre-built blocks with instructions, making it easier for more people to learn.

Data Sovereignty (Simplified):

  • Imagine Owning Your Toys: Explain that data is like your personal toys, and it's important to know where they are stored and who can access them.

Unpredictable Future:

  • Focus on Collaboration: You're right, we can't predict the future. But by working together and sharing ideas, people around the world can solve problems!

Here's the revised summary:

Imagine a world in 2035!

  • Computers will be super-fast, letting them do even more amazing things!
  • Coding will be easier to learn, like building with cool pre-made blocks!
  • It's important to know where your data is stored, just like keeping your toys safe!
  • The future is unpredictable, but by working together, we can solve problems!

Additional Ideas:

  • Encourage students to think about what kind of problems they'd like to solve with future technology.
  • Introduce them to inspiring people working in AI and technology fields.
  • Discuss the importance of responsible AI development and using technology for good.

By sparking curiosity and a sense of agency, you can inspire students to be open-minded and prepared for the exciting future of technology.




unaiwho.docx version 6/6/22 hunt for 100 helping guterres most with UN2.0

EconomistDiary.com Friends20.com & EntrepreneurialRevolution.city select 2022's greatest moments for citizens/youth of NY & HK & Utellus

Prep for UN Sept 22 summit education no longer fit for human beings/sustainability


Since gaining my MA statistics Cambridge DAMTP 1973 (Corpus Christi College) my special sibject has been community building networks- these are the 6 most exciting collaboration opportunities my life has been privileged to map - the first two evolved as grassroots person to person networks before 1996 in tropical Asian places where village women had no access to electricity grids nor phones- then came mobile and solar entrepreneurial revolutions!! 

COLLAB platforms of livesmatter communities to mediate public and private -poorest village mothers empowering end of poverty    5.1 5.2 5.3 5.4 5.5  5.6

4 livelihood edu for all 

4.1  4.2  4.3  4.4  4.5 4.6

3 last mile health services  3.1 3,2  3.3  3.4   3.5   3.6

last mile nutrition  2.1   2.2   2.3   2.4  2.5  2,6

banking for all workers  1.1  1.2  1.3   1.4   1.5   1.6

NEWS FROM LIBRARY NORMAN MACRAE -latest publication 2021 translation into japanese biography of von neumann:

Below: neat German catalogue (about half of dad's signed works) but expensive  -interesting to see how Germans selected the parts  they like over time: eg omitted 1962 Consider Japan The Economist 

feel free to ask if free versions are available 

The coming entrepreneurial revolution : a survey Macrae, Norman - In: The economist 261 (1976), pp. 41-65 cited 105 

Macrae, Norman - In: IPA review / Institute of PublicAffairs 25 (1971) 3, pp. 67-72  
 Macrae, Norman - The Economist 257 (1975), pp. 1-44 
6 The future of international business Macrae, Norman - In: Transnational corporations and world order : readings …, (pp. 373-385). 1979 >
Future U.S. growth and leadership assessed from abroad Macrae, Norman - In: Prospects for growth : changing expectations for the future, (pp. 127-140). 1977 Check Google Scholar | 
9Entrepreneurial Revolution - next capitalism: in hi-tech left=right=center; The Economist 1976
Macrae, Norman -In: European community (1978), pp. 3-6
  Macrae, Norman - In: Kapitalismus heute, (pp. 191-204). 1974

. we scots are less than 4/1000 of the worlds and 3/4 are Diaspora - immigrants in others countries. Since 2008 I have been celebrating Bangladesh Women Empowerment solutions wth NY graduates. Now I want to host love each others events in new york starting this week with hong kong-contact me if we can celebrate anoither countries winm-wins with new yorkers



TWO Macroeconomies FROM SIXTH OF PEOPLE WHO ARE WHITE & war-prone




From 60%+ people =Asian Supercity (60TH YEAR OF ECONOMIST REPORTING - SEE CONSIDER JAPAN1962)

Far South - eg African, Latin Am, Australasia

Earth's other economies : Arctic, Antarctic, Dessert, Rainforest


In addition to how the 5 primary sdgs1-5 are gravitated we see 6 transformation factors as most critical to sustainability of 2020-2025-2030

Xfactors to 2030 Xclimate XAI Xinfra Xyouth Wwomen Xpoor chris.macrae@yahoo.co.uk (scot currently  in washington DC)- in 1984 i co-authored 2025 report with dad norman.

Asia Rising Surveys

Entrepreneurial Revolution -would endgame of one 40-year generations of applying Industrial Revolution 3,4 lead to sustainability of extinction

1972's Next 40 Years ;1976's Coming Entrepreneurial Revolution; 12 week leaders debate 1982's We're All Intrapreneurial Now

The Economist had been founded   in 1843" marking one of 6 exponential timeframes "Future Histores"


we offer worldwide mapping view points from

1 2 now to 2025-30

and these viewpoints:

40 years ago -early 1980s when we first framed 2025 report;

from 1960s when 100 times more tech per decade was due to compound industrial revolutions 3,4 

1945 birth of UN

1843 when the economist was founded

1760s - adam smithian 2 views : last of pre-engineering era; first 16 years of engineering ra including america's declaration of independence- in essence this meant that to 1914 continental scaling of engineeriing would be separate new world <.old world


IF we 8 billion earthlings of the 2020s are to celebrate collaboration escapes from extinction, the knowhow of the billion asian poorest women networks will be invaluable -

in mathematically connected ways so will the stories of diaspora scots and the greatest mathematicians ever home schooled -central european jewish teens who emigrated eg Neumann , Einstein ... to USA 2nd quarter of the 20th century; it is on such diversity that entrepreneurial revolution diaries have been shaped 

EconomistPOOR.com : Dad was born in the USSR in 1923 - his dad served in British Embassies. Dad's curiosity enjoyed the opposite of a standard examined education. From 11+ Norman observed results of domination of humans by mad white men - Stalin from being in British Embassy in Moscow to 1936; Hitler in Embassy of last Adriatic port used by Jews to escape Hitler. Then dad spent his last days as a teen in allied bomber command navigating airplanes stationed at modernday Myanmar. Surviving thanks to the Americas dad was in Keynes last class where he was taught that only a handful of system designers control what futures are possible. EconomistScotland.com AbedMooc.com

To help mediate such, question every world eventwith optimistic rationalism, my father's 2000 articles at The Economist interpret all sorts of future spins. After his 15th year he was permitted one signed survey a year. In the mid 1950s he had met John Von Neumann whom he become biographer to , and was the only journalist at Messina's's birth of EU. == If you only have time for one download this one page tour of COLLABorations composed by Fazle Abed and networked by billion poorest village women offers clues to sustainability from the ground up like no white ruler has ever felt or morally audited. by London Scot James Wilson. Could Queen Victoria change empire fro slavemaking to commonwealth? Some say Victoria liked the challenge James set her, others that she gave him a poison pill assignment. Thus James arrived in Calcutta 1860 with the Queens permission to charter a bank by and for Indian people. Within 9 months he died of diarrhea. 75 years later Calcutta was where the Young Fazle Abed grew up - his family accounted for some of the biggest traders. Only to be partitioned back at age 11 to his family's home region in the far north east of what had been British Raj India but was now to be ruled by Pakistan for 25 years. Age 18 Abed made the trek to Glasgow University to study naval engineering.

new york

1943 marked centenary autobio of The Economist and my teenage dad Norman prepping to be navigator allied bomber command Burma Campaign -thanks to US dad survived, finished in last class of Keynes. before starting 5 decades at The Economist; after 15 years he was allowed to sign one survey a year starting in 1962 with the scoop that Japan (Korea S, Taiwan soon hk singapore) had found development mp0de;s for all Asian to rise. Rural Keynes could end village poverty & starvation; supercity win-win trades could celebrate Neumanns gift of 100 times more tech per decade (see macrae bio of von neumann)

Since 1960 the legacy of von neumann means ever decade multiplies 100 times more micro-technology- an unprecedented time for better or worse of all earthdwellers; 2025 timelined and mapped innovation exponentials - education, health, go green etc - (opportunities threats) to celebrating sustainability generation by 2025; dad parted from earth 2010; since then 2 journals by adam smith scholars out of Glasgow where engines began in 1760- Social Business; New Economics have invited academic worlds and young graduates to question where the human race is going - after 30 business trips to wealthier parts of Asia, through 2010s I have mainly sherpa's young journalist to Bangladesh - we are filing 50 years of cases on women empowerment at these web sites AbedMOOC.com FazleAbed.com EconomistPoor.com EconomistUN.com WorldRecordjobs.com Economistwomen.com Economistyouth.com EconomistDiary.com UNsummitfuture.com - in my view how a billion asian women linked together to end extreme poverty across continental asia is the greatest and happiest miracle anyone can take notes on - please note the rest of this column does not reflect my current maps of how or where the younger half of the world need to linkin to be the first sdg generation......its more like an old scrap book

How do humans design futures? In the 2020s, decade of the sdgs, this question has never had more urgency. To be or not to be - ref the lessons of Deming or Keynes, or Glasgow University alumnus Smith and 200 years of hi-trust economics mapmaking, later Fazle Abed - we now know how: a man-made system is defined by one goal uniting generations; a system multiplies connected peoples' work and demands either accelerating progress to its goal or collapsing. Sir Fazle Abed died in December 2019 - so who are his most active scholars of climate adaptability, where cop26 this November will be a great chance to reunite with 260 years of Adam Smith's and James Watt's purposes to end poverty? Specifically we interpret sdg 1 as meaning the next girl or boy born has a fair chance at a free, happy and productive life, as we seek to make any community a child is born into a thriving space to grow up in. Between the discovery of new worlds in 1500 and 1945, systems got worse and worse on that goal - eg processes like slavery emerged - and ultimately the world was designed around a handful of big empires, and often only the most powerful men in those empires.
4 amazing human-tech systems were invented to reach massive use by 1960: Borlaug agriculture and related solutions that every poorest village (2/3 of people still had no access to electricity) could action-learn person to person; and Deming engineering, whose goal was zero defects by helping workers humanize machines - this even allowed thousands of small suppliers each to be best at one part in machines assembled from all those parts. Although Americans invented these solutions, Asia most needed them and joyfully became world class at them - up to 2 billion people were helped to end poverty through sharing this knowhow. Unlike consuming up things, actionable knowhow multiplies value in use when it links through every community that needs it. The other two technologies - space, media and satellite telecoms, and digital analytic power - looked promising; by 1965 alumni of Moore promised to multiply the efficiency of these core tech 100-fold each decade to 2030 - that would be a trillion times more than was needed to land on the moon in the 1960s. You might think this tech could improve the race to end poverty - and initially it did, but by 1990 it was designed around the long-term goal of making 10 men richer than the 40% poorest. These men also got involved in complex vested interests, so that the vast majority of politicians in Brussels and DC backed the big getting bigger - often they used fake media to hide what they were doing to climate and other stuff affecting a world trebling in population size. We the 3 generations - children, parents, grandparents - have until 2030 to design new system orbits gravitated around goal 1 while navigating the UN's other 17 goals. Do you want to help? 8 cities where we spent most time helping students exchange sustainability solutions 2018-2019: BR0 Beijing Hangzhou:

Girls world maps begin at B01 good news reporting with fazleabed.com  valuetrue.com and womenuni.com


online library of norman macrae--


MA1 AliBaba TaoBao

MA2 Ali Financial

Ma10.1 DT and ODPS

health catalogue; energy catalogue

Keynes: 2025now - jobs Creating Gen


how poorest women in world build

A01 BRAC health system,

A02 BRAC education system,

A03 BRAC banking system

K01 Twin Health System - Haiti& Boston

Past events EconomistDiary.com

include the 15th annual spring collaboration cafe new york - 2022 was with sister city hong kong designers of metaverse for beeings.app

© 2024 Created by chris macrae.