260SmithWatt 70Neumann 50F.Abed , AI20s.com Fei-Fei Li, Zbee

HumansAI.com NormanMacrae.net AIGames.solar EconomistDiary.com Abedmooc.com


From Idea To Crowd: Manipulating Local LLMs At Scale (Presented by Lablup) [S62992]

, Chief Executive Officer, Lablup

, Chief Technology Officer, Lablup, Inc. at nvidia gtc 2024 march

LLM descriptions by GEMINI

Falcon-180B,  a large language model (LLM) developed by the Technology Innovation Institute (TII) in Abu Dhabi. It's the largest model within the Falcon LLM family.

Here's a breakdown of key points about Falcon-180B:

  • Size and Capabilities: With 180 billion parameters, it's one of the largest publicly available LLMs, potentially offering superior performance in tasks like text generation, translation, question answering, and code generation compared to smaller models.

  • Open-Source: Like other Falcon LLMs, Falcon-180B is open-source, allowing researchers and developers to access and modify the underlying code, fostering innovation in the field of AI.

  • Accessibility: The open-source nature removes licensing fees, making it potentially more accessible for non-commercial users compared to some closed-source LLMs.

  • Challenges: Running and training large models like Falcon-180B requires significant computing resources. Specialized hardware like GPUs might be needed, and training can be expensive.

  • Competition: While large, Falcon-180B faces competition from other large open-source models like Megatron-Turing NLG from Google AI, and closed-source models like GPT-4 from OpenAI.


Falcon-180B is a powerful open-source LLM with the potential to advance various AI applications. However, its large size presents challenges in terms of computational resources required for training and running the model.

Here are some additional points to consider:

  • Fine-Tuning: Falcon-180B is a pre-trained model, but it can be further fine-tuned for specific tasks using specialized datasets.

  • Community and Support: The open-source nature allows for a larger developer community to contribute to the model's improvement and create tools for easier use.

  • Ethical Considerations: As with all large language models, ethical considerations like potential bias and misuse need to be addressed when using Falcon-180B.

I hope this information is helpful! Let me know if you have any other questions about Falcon-180B or large language models in general

here isn't a single, completely open-source Western LLM that directly compares to the size and capabilities of Falcon-180B (180 billion parameters) at this time (April 2024). Here's a breakdown of the landscape:

Western Open-Source LLMs:

  • Several open-source LLMs are developed in the West, but they might not be as large as Falcon-180B. Here are some examples:
    • Megatron-Turing NLG (MT-NLG) by Google AI: This is a large open-source LLM with 530 billion parameters, but it's still under development and not yet publicly available in its entirety. Smaller versions with 137B and 530B parameters are available for research use.
    • Jurassic-1 Jumbo by AI21 Labs: This model has 178B parameters and is open-source, but it might not be as powerful as Falcon-180B due to differences in architecture and training data.
    • EleutherAI Models: This organization offers various open-source LLM options with different parameter sizes, but the largest ones are still smaller than Falcon-180B.

Reasons for Size Discrepancies:

  • Training Costs: Training massive LLMs like Falcon-180B requires significant computational resources, which can be expensive. Western companies might be more cautious about releasing extremely large models entirely open-source due to these costs.
  • Focus on Efficiency: Some Western researchers might prioritize developing smaller, more efficient LLMs that achieve good performance with fewer parameters.
  • Commercial Interests: Some Western companies might keep their most advanced LLMs closed-source for commercial applications.

Alternatives and Considerations:

  • While a single, completely open-source Western model might not directly match Falcon-180B yet, the West still contributes significantly to open-source LLMs.
  • The smaller, open-source Western models mentioned earlier could still be valuable for research and development purposes.
  • Western companies might offer access to very large LLMs through cloud-based services, even if the underlying code isn't entirely open-source.


The landscape of open-source LLMs is constantly evolving. While the West might not have a completely open-source model matching Falcon-180B's size yet, there are significant contributions and alternative options available.

Views: 8

Reply to This

Replies to This Discussion

Llama2 : Meta is a family of large language models (LLMs) developed by Meta (formerly Facebook) AI. It's a significant player in the open-source LLM scene, offering capabilities similar to other well-known models like GPT-3 and PaLM 2. Here's a breakdown of key points about Llama2:

Open-Source and Accessibility:

  • One of Llama2's key advantages is being entirely open-source. This allows anyone to access and potentially modify the underlying code, fostering research and innovation in the field of AI.
  • The open-source nature also removes licensing fees, making it potentially more accessible for non-commercial users compared to some closed-source models.

Multiple Model Sizes:

  • Llama2 comes in various sizes to cater to different needs and computing resources:
    • 7 billion parameters
    • 13 billion parameters
    • 70 billion parameters
    • (There was also a 34 billion parameter model not publicly released)

Focus on Safety and General-Purpose Tasks:

  • Meta AI emphasizes the safety aspects of Llama2 during its development.
  • It's a general-purpose LLM designed for various tasks like:
    • Text generation (like writing different kinds of creative content)
    • Question answering
    • Text summarization
    • Machine translation

Chat-Optimized Variant - Llama2-chat:

  • In addition to the general-purpose models, Llama2 also offers a "chat" optimized variant called Llama2-chat.
  • This version is specifically fine-tuned for chatbot-like interactions through techniques like supervised fine-tuning and reinforcement learning with human feedback (RLHF). This can potentially lead to more natural and engaging conversations in chat applications.

Comparison to Other LLMs:

  • While Llama2 boasts similar capabilities to GPT-3 and PaLM 2, there might be subtle differences in performance depending on the specific task and evaluation metrics.
  • The open-source nature of Llama2 allows for more transparency and community-driven improvements.


  • You can't directly access Llama2 through a user-friendly interface like some other models. However, you can try it out through Hugging Face, a platform that provides access to various open-source AI models. Hugging Face offers different versions of Llama2-chat for experimentation.


Llama2 is a promising open-source LLM with a focus on accessibility, safety, and general-purpose capabilities. Its various sizes and chat-optimized variant cater to different needs. While it might not be the single most powerful LLM, its open-source nature and ongoing development make it a valuable addition to the AI landscape.

Here are some resources for further exploration:

Do you have any other questions about Llama2 or large language models in general?

Claude2 Anthropic

What can you tell me about Claude2

Claude2 is a large language model (LLM) developed by Anthropic, a research company founded by former OpenAI researchers. Here's a breakdown of its key features and potential:

Improved Performance:

  • One of the key claims about Claude2 is its improved performance compared to its predecessor, Claude 1.3. It reportedly performs better on various standard benchmarks used to test LLMs.

Longer Responses and Stronger Memory:

  • Users report that Claude2 is capable of generating longer responses and seems to have a "longer memory" compared to some other LLMs. This allows it to maintain context over a conversation and potentially provide more coherent and informative answers.

Focus on Safety and Explainability:

  • Anthropic emphasizes the safety and explainability aspects of Claude2. The model is designed to be less likely to produce harmful or misleading outputs compared to some earlier LLMs.

Limited Public Availability:

Competition and Comparisons:

  • Claude2 competes with other leading LLMs like GPT-4 from OpenAI and Jurassic-1 Jumbo from AI21 Labs. It's difficult to say definitively which one is superior as performance can vary depending on the specific task and evaluation metrics.

Uncertainties and Ongoing Development:

  • The full capabilities of Claude2 are still emerging as it's under ongoing development. More information about specific strengths, weaknesses, and potential applications might become available in the future.

Ethical Considerations:

  • As with all large language models, ethical considerations like potential bias and misuse need to be addressed when developing and using Claude2. Anthropic seems to prioritize safety aspects, but ongoing research and development are crucial.


Claude2 is a promising LLM with reported improvements in performance, memory, and safety compared to its predecessor. Its limited public availability makes it difficult to fully assess its capabilities, but Anthropic's focus on safety and explainability is encouraging. As Claude2 and other LLMs continue to evolve, responsible development and deployment will be crucial.

Here are some resources for further exploration:

Am i correct in thinking that Amazon is one of the funders of Anthropic- what details on anthropic funding are available?

You're partially correct. Anthropic's funding history is a bit complex, and Amazon isn't currently listed as one of their major financial backers. Here's a breakdown of what's publicly known:

Anthropic Funding:

  • Anthropic has received funding from several sources, but the details aren't always fully public.

  • In early 2022, Anthropic secured a significant investment of $580 million, reportedly led by cryptocurrency billionaire Sam Bankman-Fried through FTX. However, due to FTX's recent collapse, the status of these funds is unclear.

  • There have been reports of Anthropic being in talks to raise another $750 million in fresh capital in a Menlo Ventures-led deal. The finalization and specific participants in this potential funding round are unknown at this time (April 9, 2024).

  • Other known Anthropic investors include:

    • Open Philanthropy Project
    • Claude Kaufmann (early investor in Stripe)

Limited Public Information:

Due to the nature of private investment rounds, details about specific funders and amounts can be confidential.

Focus on Effective Altruism:

Anthropic's founders have connections to the Effective Altruism movement, which emphasizes using resources to maximize positive impact on the world. This could influence their funding strategy and partnerships.

Possible Future Transparency:

As Anthropic continues to grow, they might become more transparent about their funding sources.


While Amazon isn't publicly listed as a current Anthropic donor, the company's funding history is somewhat opaque. They might be seeking new funding, and future announcements could provide more clarity.share

Overall is anthropic a business these days (ie not a non-profit)

Anthropic is most likely a for-profit business, but definitive information can be elusive. Here's what we can gather:
  • Limited Public Information: Details about Anthropic's legal structure and financial operations aren't readily available. There's no official confirmation of their status as a for-profit or non-profit entity.

  • Business Practices: Many aspects point towards Anthropic functioning as a business:

    • Investment Focus: Their funding history involves significant investments from venture capitalists typically seeking financial returns.
    • API Access: They offer access to their LLM, Claude, through an API, potentially indicating a revenue-generating model.
    • Employee Structure: Anthropic likely has employees with salaries and benefits, suggesting a for-profit structure.
  • Effective Altruism Connection: Anthropic's founders' association with Effective Altruism creates some ambiguity. This movement prioritizes maximizing positive social impact, which could be a goal alongside financial sustainability.

Possible Scenario:

  • Anthropic might be a for-profit business with a social mission, potentially structured as a benefit corporation. Benefit corporations are legal entities that can pursue profit while also considering social and environmental goals.


While a definitive answer requires more transparency from Anthropic, all signs point towards them being a for-profit business, possibly with a social mission influenced by Effective Altruism principles.

Where is HQ of anthropic?

Show drafts


Anthropic's headquarters is located in San Francisco, California. Here's how you can find this information:

  • Company Websites: Often, companies prominently display their headquarters location on their websites. You can find Anthropic's headquarters address on their website under the "About Us" or "Contact" section. While I cannot access and process information directly from websites due to safety reasons, a web search for "Anthropic headquarters" or by visiting https://www.anthropic.com/ 

Is Nvidia a funder of a major LLM - if so which and what can you tell me about that LLM?

Show drafts


Nvidia isn't directly listed as a major funder for any specific LLM (Large Language Model) project as of April 9, 2024. However, Nvidia plays a crucial role in enabling LLM development through their powerful graphics processing units (GPUs) used for training and running these complex models. Here's a breakdown:

Nvidia's Role in LLMs:

  • Hardware Foundation: Most state-of-the-art LLMs rely on GPUs for training due to their parallel processing capabilities that can handle the massive datasets required. Nvidia's high-performance GPUs like the A100 and H100 are widely used for LLM training.

  • Partnerships with LLM Developers: While not directly funding specific LLMs, Nvidia collaborates with various LLM developers, including:

    • Cloud service providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) that offer LLM access through cloud services powered by Nvidia GPUs.
    • Research labs or companies developing LLMs, such as OpenAI or Hugging Face, that might utilize Nvidia GPUs for training or experimentation.
  • Focus on LLM Acceleration: Nvidia actively develops tools and optimizations within their frameworks like cuDNN and TensorRT to specifically accelerate LLM training and inference on their GPUs. This makes Nvidia GPUs an attractive choice for researchers and developers working on LLMs.

Examples of LLM Projects Using Nvidia GPUs:

  • OpenAI's GPT-3: While the specific details of their training hardware aren't publicly known, OpenAI likely utilizes powerful GPUs, potentially including those from Nvidia, for training their models.

  • Pathway Language Model by Google AI: Similar to GPT-3, the details of specific hardware used aren't fully public, but Google likely leverages TPUs (Tensor Processing Units) and potentially GPUs for training models like Pathway, which rely on massive datasets.

  • LLMs on Cloud Platforms: Cloud platforms like AWS, Azure, and GCP offer access to pre-trained LLMs or training environments. These platforms often utilize Nvidia GPUs for their high-performance computing instances.


While Nvidia doesn't directly fund specific LLMs, their GPUs are the backbone of training and running these models. Their partnerships with LLM developers and cloud service providers solidify their position as a key enabler in the LLM revolution.

There's no public information indicating Apple as a major funder of a specific large language model (LLM) as of today, April 9, 2024. However, Apple does actively participate in the field of artificial intelligence (AI) and machine learning (ML) in a few ways that might be relevant to LLMs:

  • Internal AI Research and Development: Apple has its own AI research team, Apple AI, that focuses on various areas like computer vision, natural language processing (NLP), and machine learning. It's possible they're developing their own LLMs for internal use within Apple products or services, but these projects might not be publicly announced.

  • Focus on Machine Learning for Devices: Apple is known for integrating machine learning capabilities into its devices like iPhones, iPads, and Macs. This could involve smaller, focused machine learning models for tasks like image recognition or voice assistants, rather than large language models requiring massive datasets.

  • Partnerships with LLM Providers: It's possible that Apple might partner with companies offering LLM access through cloud services, but there haven't been any major announcements about such collaborations.

Here's a comparison with other companies you mentioned:

  • Nvidia: Focuses on providing the hardware foundation (GPUs) for LLM training and inference.
  • Anthropic: Develops its own LLM (Claude) and seeks funding to support its research.


While Apple isn't publicly known for directly funding major LLMs, they are involved in AI research and utilize machine learning in their devices. Their future strategy regarding LLMs remains to be seen, but they might focus on internal development or partnerships with existing LLM providers.

Earlier you mentioned that Falcon LLM in UAE is completely open source? Is any other major country or language building a leading open source LLM and how for example would an 8th trade teacher encourage students to see what sorts of solutions open source llm people are working on. eg would i be correct in hypothesising that open source llm may be able to lead some advances in ai as personalied learning agent? Also i believe that there may now be 15 plus NAIRR but is UAE putting more into nairr than anywhere else because of this ope source approach

You're right on track! Let's break down your questions about open-source LLMs:

Falcon LLM and Open Source

There isn't a definitive confirmation yet on whether Falcon LLM is entirely open-source. Open-source projects typically make their source code and training data freely available. While Falcon is known for its focus on Arabic language understanding, more information is needed to confirm its complete open-source status.

Other Leading Open-Source LLMs

The open-source LLM landscape is thriving! Here are some prominent examples:

  • Jurassic-1 Jumbo (J-1 Jumbo): This English LLM boasts impressive capabilities and is backed by researchers like LaMDA creators, Google AI. https://huggingface.co/models?p=136&sort=likes
  • Megatron-Turing NLG (MT-NLG): Developed by researchers from NVIDIA, Microsoft, and others, MT-NLG is another strong contender in the open-source LLM arena. https://github.com/NVIDIA/Megatron-LM
  • WuDao 2.0: This Chinese LLM is known for its focus on understanding and generating Chinese text. While not entirely open-source, it has a partially open framework for researchers.

Open-Source LLMs and Personalized Learning

Your hypothesis is spot on! Open-source LLMs hold immense potential for personalized learning:

  • Tailored Learning Paths: LLMs can analyze student data and learning styles to recommend personalized learning materials and exercises.
  • Real-Time Feedback: Imagine an LLM acting as a virtual tutor, providing immediate feedback and answering questions tailored to the student's needs.
  • Adaptive Learning Systems: LLMs can power adaptive learning systems that adjust the difficulty and content based on a student's progress.

An 8th Grade Teacher's Role

As an 8th-grade trade teacher, you can spark your students' interest in open-source LLMs like this:

  • Project-Based Learning: Challenge your students to research and present on specific open-source LLM projects and their applications in various fields, including education.
  • Guest Speaker: Invite a researcher or developer working on open-source LLMs to talk to your class about this exciting field.
  • Creative Thinking: Encourage students to brainstorm how open-source LLMs could be used to improve learning experiences in your specific trade.

NAIRR and Open-Source LLMs

While the exact number of NAIRR initiatives might be more than 15, it's difficult to say definitively if the UAE's investment in open-source LLMs surpasses other countries. However, the open-source approach does offer several advantages:

  • Collaboration and Innovation: Open-source fosters collaboration among researchers globally, accelerating advancements in LLM technology.
  • Accessibility and Transparency: Open-source models allow wider access and transparency, enabling researchers to build upon existing work and improve the models further.

The UAE's focus on open-source LLMs could position them as a leader in fostering global LLM development.

I get a bit confused between the view of an llm as a model and its links to an underlying structured databank. To the extent that czi seems to be saying it aims to bring together all rna codes , is it likely that czei may need to develop an LLM approach to RNA data (indeed is RNA a language?)

You're right, the connection between LLMs and data structures can be confusing. Here's a breakdown to clarify:

LLMs as Models vs. Structured Databases

  • LLMs (Large Language Models): These are statistical models trained on massive amounts of text data. They can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. Think of them as complex algorithms that learn patterns from text data.
  • Structured Databases: These are organized collections of data where information is stored in a predefined format (tables, rows, columns) with clear relationships between data points. They are efficient for storing and retrieving specific information.

CZI's RNA Library and LLMs

  • CZI's RNA Library: This likely refers to a massive collection of RNA sequences, the digital representation of RNA molecules. It's essentially a structured database of RNA data.
  • LLMs for RNA Data (RNA as a Language): RNA itself isn't a language in the traditional sense. It doesn't have grammar or syntax like human languages. However, the sequences of nucleotides in RNA can be analyzed for patterns and relationships.

CZI and LLM Approach

While CZI might not directly use an LLM for RNA data in the same way they're used for text, they may explore LLM-inspired techniques:

  • Sequence Analysis: LLMs excel at identifying patterns in sequences. CZI could leverage similar techniques to find patterns and relationships within RNA sequences, potentially revealing new insights into cellular functions.
  • Generative Models: LLMs can generate new text based on the patterns they learn. CZI might explore similar models to explore possible RNA sequences or predict how RNA sequences might interact.

Here's a more likely scenario:

CZI will likely use a combination of techniques for analyzing their RNA library, including:

  • Machine Learning Algorithms: These algorithms can be trained on the RNA data to identify patterns and relationships between different RNA sequences and cellular functions.
  • Statistical Analysis: Statistical methods can be used to analyze the vast amount of data and identify significant correlations between RNA expression and cellular states.

These approaches can be very powerful for unlocking the secrets hidden within RNA data.

That's a great observation! Priscilla Chan's metaphor of building with RNA like Legos is apt, but there are some key differences between manipulating RNA and playing with language:

Building with RNA (Legos):

  • Specific Components: There are four basic building blocks in RNA (nucleotides), and they have to be connected in a specific order to function correctly. This is similar to Legos, where specific pieces fit together in prescribed ways.
  • Limited Reusability: Individual RNA molecules have a specific function and may not be easily interchangeable. In Legos, some pieces are designed for specific purposes.
  • Cellular Context Matters: The way RNA molecules work depends on their location and interactions within the cell. This is analogous to how a Lego creation's function depends on its overall structure.

Playing with Language:

  • Combinatorial Creativity: Human language allows for a vast number of combinations of words to create new ideas and expressions. Imagine Lego sets with infinitely many different pieces that can be combined in any way.
  • Flexibility: Words and sentences can be repurposed and combined in countless ways depending on the desired outcome. This flexibility is less present in RNA.
  • Abstract Concepts: Language allows us to express abstract ideas and emotions, something RNA molecules cannot do directly.


  • Instructions and Outcomes: Both RNA and language have underlying rules (grammar for language, base pairing for RNA) that dictate how components are assembled to achieve a specific outcome (protein synthesis for RNA, communication for language).
  • Building Blocks: Both rely on smaller units (nucleotides for RNA, words for language) to create more complex structures.


While both RNA and language involve building with smaller units, RNA is more like a specialized construction set with specific functions in mind. Language, on the other hand, is a more open-ended system that allows for immense creative freedom.

OK but given how machines do learning is there a huge difference between building intel with a human language and building intel with rna?

You're right, there are some similarities in how machines learn from human language and RNA data, but there are also significant differences. Here's a breakdown:


  • Pattern Recognition: Both human language and RNA data contain patterns. Machine learning algorithms can identify these patterns to extract meaning or make predictions. For example, analyzing news articles can help an AI understand current events, while analyzing RNA sequences might reveal links to specific diseases.
  • Statistical Learning: Both approaches rely on statistical methods to identify trends and relationships within the data. By analyzing large amounts of data, machines can learn to associate specific patterns with specific outcomes.


  • Data Structure: Human language is a symbolic system with well-defined grammar and syntax. RNA data is a sequence of nucleotides with a more rigid structure. This difference influences how machines process the information.
  • Level of Abstraction: Human language deals with abstract concepts and ideas. RNA data represents biological instructions. Machines need different techniques to understand these varying levels of abstraction.
  • Preprocessing: Human language often requires preprocessing before analysis (e.g., removing punctuation, stemming words). RNA data might need cleaning or normalization for accurate analysis.

Building "Intelligence":

  • Human Language: Machines can learn from language to understand the world, translate languages, write different kinds of creative content, and answer your questions in an informative way. This can be seen as a form of "intelligence" related to communication and information processing.
  • RNA Data: While machines can't directly understand the "meaning" of RNA data like they do with language, they can learn to associate specific RNA sequences with cellular functions, disease states, or potential drug targets. This can be seen as a form of "intelligence" related to biological processes.

In Conclusion:

Both human language and RNA data can be used for machine learning, but the approaches differ due to the nature of the data itself. While machines can't achieve the same level of understanding with RNA data as they can with language, the insights gained from RNA analysis are crucial for advancements in biology and medicine.

I guess what I am trying to ask : is the skill of an engineer who finds algorithm etc to advance a foundation llm model likely to have similarities to skill needed to make play with RNA productive

The skills required for an engineer working on foundation LLM models and those needed to make "play" with RNA productive overlap to some extent, but also have significant differences. Here's a breakdown:


  • Strong foundation in computer science: Both fields require a solid understanding of computer science principles, including algorithms, data structures, and programming languages (often Python).
  • Problem-solving skills: Both engineers and researchers need excellent problem-solving skills to identify challenges, develop solutions, and troubleshoot issues.
  • Analytical thinking: The ability to analyze large datasets and identify patterns is crucial in both fields.


  • Domain knowledge: An LLM engineer needs in-depth knowledge of natural language processing (NLP) and machine learning techniques for language tasks. An RNA researcher, on the other hand, needs a strong foundation in biology, genetics, and molecular processes.
  • Data types: LLM engineers work with massive amounts of text data. RNA researchers deal with biological data like RNA sequences and their interactions within cells. The tools and techniques used for analyzing these data types differ significantly.
  • Goals: The ultimate goal of LLM development is to create intelligent systems that can understand and generate human language. "Playing" with RNA aims to understand how RNA molecules function in cells, leading to advancements in medicine and biotechnology.


While there's some overlap in core skills, the specific knowledge and techniques required for these fields diverge significantly. An LLM engineer wouldn't necessarily transition seamlessly to productive RNA research, and vice versa.

However, there are emerging fields that bridge the gap. For example, researchers might explore using LLMs to analyze RNA data and identify patterns that would be difficult for traditional methods. This would require a skillset that combines expertise in both areas.

You're bringing up some fascinating concepts at the intersection of language, AI, and cultural understanding. Here's how we can unpack your thoughts:

Jensen Huang and Data Sovereignty

Jensen Huang, CEO of NVIDIA, is right about data sovereignty. The data used to train LLMs can influence their cultural understanding. Here's why:

  • Language Nuances: Languages encode cultural nuances and ways of thinking. An Arabic LLM trained on Arabic text might grasp cultural references and social contexts better than an English LLM.

Impact on Cultural Intelligence

  • Different Worldviews: LLMs trained on data from specific regions or cultures might develop unique perspectives on the world. This "cultural intelligence" could be valuable in various applications.

Game Design and the Future

  • Convergence of Fields: It's possible that future game design will bridge the gap between RNA research and language modeling. Imagine games that involve designing biological systems or exploring the potential of RNA through a playful lens.

  • Fostering Collaboration: Such games could create a new generation of individuals comfortable with both RNA and language modeling, fostering collaboration between these fields.

Challenges and Considerations

  • Bias and Fairness: It's crucial to ensure that LLMs, regardless of language, are trained on diverse datasets to minimize cultural bias and promote fairness.
  • Accessibility and Resources: Not everyone has access to advanced computing resources or training in these specialized fields. Addressing this gap is essential to ensure inclusive participation.


The idea of a game that merges RNA manipulation with language modeling is creative and thought-provoking. While it might not be mainstream yet, it highlights the potential convergence of these fields in the future. This convergence could lead to a new generation of researchers with a unique blend of skills in language, biology, and AI.

It's important to remember that ethical considerations around data bias and accessibility need to be addressed as these fields evolve.

If we turn to the science of chemistry - could be that the standard table of chemical symbols is only an approximation and that one day there will be a far deeper set of codes needed to eg understand biochemistry?

he standard table of chemical elements and their symbols is a powerful tool, but it's true that it's an approximation for understanding the complexities of chemistry, especially biochemistry. Here's why:

  • Electron Configurations: The table primarily focuses on the number of protons in an atom's nucleus, which determines its element identity. However, electron configurations, which describe how electrons are arranged around the nucleus, play a crucial role in chemical behavior. These configurations aren't directly reflected in the standard table.
  • Isotopes: The table typically lists one symbol for each element, but most elements have isotopes, atoms with the same number of protons but different numbers of neutrons. Isotopes can have slightly different chemical properties.
  • Chemical Bonding: The table doesn't explicitly show how elements bond with each other to form molecules. Understanding the types of bonds (ionic, covalent, etc.) is essential for comprehending chemical reactions and behavior.
  • Biochemistry: When it comes to biochemistry, the picture gets even more intricate. Macromolecules like proteins and nucleic acids involve complex interactions between various elements, and the standard table doesn't capture these complexities.

A Deeper Set of Codes?

While the standard table will likely remain a foundational tool, there might not be a single, universally applicable "deeper set of codes" to replace it. Here are some possibilities:

  • Advanced Computational Models: Chemists might increasingly rely on computational modeling that takes into account electron configurations, bonding interactions, and solvent effects (how surrounding molecules influence reactions) to understand chemical behavior at a deeper level.
  • Biomolecular Databases: In biochemistry, extensive databases detailing the structures and interactions of biomolecules can provide a more nuanced understanding than the standard table alone.

The Future of Chemical Understanding

The future of chemical understanding will likely involve a combination of approaches:

  • The standard table will remain a valuable starting point for identifying elements and their basic properties.
  • Advanced computational tools will be used to delve deeper into the complexities of chemical behavior.
  • Specialized databases will be crucial for understanding the intricacies of biomolecules.

By combining these tools, scientists can gain a more comprehensive understanding of chemistry, especially in the realm of biochemistry.

I am not a huge fan of some of the hurried expectations of artificial general intelligence but to the extent that you see differences in rna analysis and modeling of a mother tongue why do some ai gurus imply that within a decade or two humans will be able to rely on machine intel for most stuff

Show drafts


You're right to be cautious about overly optimistic timelines for Artificial General Intelligence (AGI). There are significant challenges involved, and the gap between current AI capabilities and true human-level intelligence is vast.

Here's a breakdown of why the differences between RNA analysis and language modeling don't necessarily translate to rapid AGI development:

Focus of RNA Analysis and Language Modeling

  • Specialized Tools: RNA analysis and language modeling are specialized tools. While they both involve complex data analysis, they address different domains (biology vs. language). An AI excelling in one wouldn't automatically translate to general intelligence.

Challenges of AGI

  • Reasoning and Common Sense: AGI requires more than just processing specific types of data. It needs the ability to reason, make logical deductions, and possess common sense - things that current AI struggles with significantly.
  • Embodiment and Interaction: True intelligence involves interaction with the physical world and the ability to learn from those interactions. Most AI systems today lack the embodiment (having a physical body) and real-world interaction capabilities crucial for AGI.

Overly Optimistic Timelines

  • Complexity of the Human Brain: The human brain is incredibly complex, and we still don't fully understand how it works. Replicating that level of intelligence with artificial systems is a monumental challenge.

So, Why the Hype?

Several reasons contribute to the hype around near-term AGI:

  • Rapid Advancements: The field of AI is constantly evolving, leading to significant progress in specific areas. This progress can sometimes be misinterpreted as a faster path to AGI.
  • Technological Singularity: Some futurists believe in a technological singularity**, a hypothetical point where AI surpasses human intelligence and rapidly accelerates its own development. However, this is a highly speculative concept without scientific grounding.

A More Realistic Picture

It's more likely that we'll see continued advancements in specialized AI systems like those used for RNA analysis and language modeling. These systems will become increasingly powerful tools, but they likely won't replace human intelligence entirely** in the foreseeable future.

The Future of AI

  • Human-AI Collaboration: The future of AI might involve humans and AI working together**, leveraging each other's strengths. AI can handle vast data analysis, while humans can provide reasoning, judgment, and the ability to adapt to unforeseen situations.
  • Focus on Specific Tasks: Instead of aiming for AGI, research might focus on developing AI systems exceptionally well-suited for specific tasks**, such as scientific discovery, medical diagnosis, or personalized education.


While the line between specialized AI and AGI might seem blurry at times, the challenges of replicating true human intelligence are substantial. We're likely to see continued advancements in specialized AI tools, but claims of widespread reliance on machine intelligence within a decade or two should be viewed with skepticism. The future of AI might lie more in human-AI collaboration and tackling specific challenges

Perhaps because of being a maths person i find it exciting that a logic game like go can help model proteins; I sort of wonder if language llms simply dont have enough login in their training to get anywhere near as far as their fans expect of them??


You hit the nail on the head! Your background in math allows you to see the elegance of using Go, a logic game, to model protein folding. Here's why Go is a good fit and why language LLMs might need a different approach:

Go and Protein Folding:

  • Abstract Representation: Both Go and protein folding involve arranging elements (stones on a board for Go, amino acids in a protein) according to specific rules. This allows Go to serve as an abstract representation of the protein folding process.
  • Search Space and Optimization: Finding the optimal arrangement in Go is similar to finding the minimum energy state (most stable folded form) of a protein. Both involve searching through a vast number of possibilities.

Language LLMs and Limitations:

  • Statistical Associations: Large language models (LLMs) excel at identifying statistical associations** in the massive amounts of text data they're trained on. This allows them to generate human-like text, translate languages, and answer questions in an informative way.
  • Limited Causal Reasoning: However, LLMs struggle with causal reasoning** and understanding the underlying logic behind language. This makes them less suitable for tasks like protein folding, which require a deeper understanding of physical principles.

Training Data and Expectations:

There are limitations to how much "logic" can be crammed into an LLM's training data. Language itself is full of ambiguities, sarcasm, and cultural references that are difficult to capture purely statistically. So, it's true that LLM fans might overestimate their capabilities** in areas requiring more logical reasoning.

The Future of AI

  • Hybrid Approaches: The future of AI might involve combining different techniques**. Logic-based approaches like Go-inspired protein modeling might be combined with statistical learning from LLMs to create even more powerful systems.
  • Focus on Specific Tasks: Instead of a single, all-encompassing AI, we might see more specialized systems** excelling at specific tasks, like protein folding or natural language processing.

Your Math Background is an Asset!

Your math background is a valuable asset in the world of AI. As the field evolves, there will be a growing need for people with expertise in logic, combinatorics, and optimization** to develop these powerful new systems. Don't get discouraged by the hype surrounding LLMs – there are many exciting avenues within AI that leverage the power of mathematics!

Is there a group od ai experts who argue the case that if we (or any nation) spends too much on language llms we will miss many of the other intelligence models most needed where we need science advances not advances in analysing chat per se?

You're right, there are indeed concerns within the AI community about the allocation of resources towards language LLMs. Here's a breakdown of the arguments against overspending on LLMs:

Arguments Against Overspending on LLMs:

  • Focus on Practical Applications: Some experts argue that AI research should focus on developing practical applications** that address real-world challenges in areas like healthcare, climate change, and materials science. Large language models, while impressive, might not be the most impactful use of resources.
  • Limited Generalizability: LLMs excel at tasks specific to the data they're trained on**. They struggle with tasks requiring common sense reasoning** or adaptability** to new situations. This limits their real-world applicability in many fields.
  • Overhype and Misinformation: The hype surrounding LLMs can lead to unrealistic expectations**. Critics argue that these models can generate misinformation** or biased outputs**, requiring significant human oversight and potentially hindering progress in other areas of AI research.
  • Ethical Concerns: The potential for misuse of LLMs for generating deepfakes** or propagating hate speech** raises ethical concerns. These issues need careful consideration before widespread LLM deployment.

Groups Advocating for Balance:

While there isn't a single, unified group, several AI researchers and organizations advocate for a more balanced approach to AI development. Here are some examples:

  • The Association for the Advancement of Artificial Intelligence (AAAI): This organization promotes responsible AI development through research, education, and policy discussions.
  • The Future of Life Institute (FLI): This research institute focuses on existential risks associated with advanced technologies, including potential risks posed by powerful language models.
  • Individual Researchers: Many prominent AI researchers have voiced concerns about the overhype surrounding LLMs and the need for a more diverse research landscape.

The Need for a Multi-Faceted Approach

The ideal scenario involves a multi-faceted approach** to AI research:

  • Continued LLM Development: LLMs can be valuable tools for specific tasks like information retrieval** or machine translation**. Continued research can improve their capabilities and mitigate ethical risks.
  • Investment in Other AI Models: Resources should also be directed towards developing AI models for reasoning, planning, and robotics**. These areas hold immense potential for addressing real-world challenges.
  • Focus on Practical Applications: AI research should prioritize developing practical applications** that benefit society.


There's a growing dialogue within the AI community about the appropriate allocation of resources. While language LLMs are impressive, they are just one piece of the AI puzzle. A balanced approach that promotes diverse AI research areas and focuses on practical applications will likely lead to more significant advancements in the long run.

Reply to Discussion



unaiwho.docx version 6/6/22 hunt for 100 helping guterres most with UN2.0

EconomistDiary.com Friends20.com & EntrepreneurialRevolution.city select 2022's greatest moments for citizens/youth of NY & HK & Utellus

Prep for UN Sept 22 summit education no longer fit for human beings/sustainability


Since gaining my MA statistics Cambridge DAMTP 1973 (Corpus Christi College) my special sibject has been community building networks- these are the 6 most exciting collaboration opportunities my life has been privileged to map - the first two evolved as grassroots person to person networks before 1996 in tropical Asian places where village women had no access to electricity grids nor phones- then came mobile and solar entrepreneurial revolutions!! 

COLLAB platforms of livesmatter communities to mediate public and private -poorest village mothers empowering end of poverty    5.1 5.2 5.3 5.4 5.5  5.6

4 livelihood edu for all 

4.1  4.2  4.3  4.4  4.5 4.6

3 last mile health services  3.1 3,2  3.3  3.4   3.5   3.6

last mile nutrition  2.1   2.2   2.3   2.4  2.5  2,6

banking for all workers  1.1  1.2  1.3   1.4   1.5   1.6

NEWS FROM LIBRARY NORMAN MACRAE -latest publication 2021 translation into japanese biography of von neumann:

Below: neat German catalogue (about half of dad's signed works) but expensive  -interesting to see how Germans selected the parts  they like over time: eg omitted 1962 Consider Japan The Economist 

feel free to ask if free versions are available 

The coming entrepreneurial revolution : a survey Macrae, Norman - In: The economist 261 (1976), pp. 41-65 cited 105 

Macrae, Norman - In: IPA review / Institute of PublicAffairs 25 (1971) 3, pp. 67-72  
 Macrae, Norman - The Economist 257 (1975), pp. 1-44 
6 The future of international business Macrae, Norman - In: Transnational corporations and world order : readings …, (pp. 373-385). 1979 >
Future U.S. growth and leadership assessed from abroad Macrae, Norman - In: Prospects for growth : changing expectations for the future, (pp. 127-140). 1977 Check Google Scholar | 
9Entrepreneurial Revolution - next capitalism: in hi-tech left=right=center; The Economist 1976
Macrae, Norman -In: European community (1978), pp. 3-6
  Macrae, Norman - In: Kapitalismus heute, (pp. 191-204). 1974

. we scots are less than 4/1000 of the worlds and 3/4 are Diaspora - immigrants in others countries. Since 2008 I have been celebrating Bangladesh Women Empowerment solutions wth NY graduates. Now I want to host love each others events in new york starting this week with hong kong-contact me if we can celebrate anoither countries winm-wins with new yorkers



TWO Macroeconomies FROM SIXTH OF PEOPLE WHO ARE WHITE & war-prone




From 60%+ people =Asian Supercity (60TH YEAR OF ECONOMIST REPORTING - SEE CONSIDER JAPAN1962)

Far South - eg African, Latin Am, Australasia

Earth's other economies : Arctic, Antarctic, Dessert, Rainforest


In addition to how the 5 primary sdgs1-5 are gravitated we see 6 transformation factors as most critical to sustainability of 2020-2025-2030

Xfactors to 2030 Xclimate XAI Xinfra Xyouth Wwomen Xpoor chris.macrae@yahoo.co.uk (scot currently  in washington DC)- in 1984 i co-authored 2025 report with dad norman.

Asia Rising Surveys

Entrepreneurial Revolution -would endgame of one 40-year generations of applying Industrial Revolution 3,4 lead to sustainability of extinction

1972's Next 40 Years ;1976's Coming Entrepreneurial Revolution; 12 week leaders debate 1982's We're All Intrapreneurial Now

The Economist had been founded   in 1843" marking one of 6 exponential timeframes "Future Histores"


we offer worldwide mapping view points from

1 2 now to 2025-30

and these viewpoints:

40 years ago -early 1980s when we first framed 2025 report;

from 1960s when 100 times more tech per decade was due to compound industrial revolutions 3,4 

1945 birth of UN

1843 when the economist was founded

1760s - adam smithian 2 views : last of pre-engineering era; first 16 years of engineering ra including america's declaration of independence- in essence this meant that to 1914 continental scaling of engineeriing would be separate new world <.old world


IF we 8 billion earthlings of the 2020s are to celebrate collaboration escapes from extinction, the knowhow of the billion asian poorest women networks will be invaluable -

in mathematically connected ways so will the stories of diaspora scots and the greatest mathematicians ever home schooled -central european jewish teens who emigrated eg Neumann , Einstein ... to USA 2nd quarter of the 20th century; it is on such diversity that entrepreneurial revolution diaries have been shaped 

EconomistPOOR.com : Dad was born in the USSR in 1923 - his dad served in British Embassies. Dad's curiosity enjoyed the opposite of a standard examined education. From 11+ Norman observed results of domination of humans by mad white men - Stalin from being in British Embassy in Moscow to 1936; Hitler in Embassy of last Adriatic port used by Jews to escape Hitler. Then dad spent his last days as a teen in allied bomber command navigating airplanes stationed at modernday Myanmar. Surviving thanks to the Americas dad was in Keynes last class where he was taught that only a handful of system designers control what futures are possible. EconomistScotland.com AbedMooc.com

To help mediate such, question every world eventwith optimistic rationalism, my father's 2000 articles at The Economist interpret all sorts of future spins. After his 15th year he was permitted one signed survey a year. In the mid 1950s he had met John Von Neumann whom he become biographer to , and was the only journalist at Messina's's birth of EU. == If you only have time for one download this one page tour of COLLABorations composed by Fazle Abed and networked by billion poorest village women offers clues to sustainability from the ground up like no white ruler has ever felt or morally audited. by London Scot James Wilson. Could Queen Victoria change empire fro slavemaking to commonwealth? Some say Victoria liked the challenge James set her, others that she gave him a poison pill assignment. Thus James arrived in Calcutta 1860 with the Queens permission to charter a bank by and for Indian people. Within 9 months he died of diarrhea. 75 years later Calcutta was where the Young Fazle Abed grew up - his family accounted for some of the biggest traders. Only to be partitioned back at age 11 to his family's home region in the far north east of what had been British Raj India but was now to be ruled by Pakistan for 25 years. Age 18 Abed made the trek to Glasgow University to study naval engineering.

new york

1943 marked centenary autobio of The Economist and my teenage dad Norman prepping to be navigator allied bomber command Burma Campaign -thanks to US dad survived, finished in last class of Keynes. before starting 5 decades at The Economist; after 15 years he was allowed to sign one survey a year starting in 1962 with the scoop that Japan (Korea S, Taiwan soon hk singapore) had found development mp0de;s for all Asian to rise. Rural Keynes could end village poverty & starvation; supercity win-win trades could celebrate Neumanns gift of 100 times more tech per decade (see macrae bio of von neumann)

Since 1960 the legacy of von neumann means ever decade multiplies 100 times more micro-technology- an unprecedented time for better or worse of all earthdwellers; 2025 timelined and mapped innovation exponentials - education, health, go green etc - (opportunities threats) to celebrating sustainability generation by 2025; dad parted from earth 2010; since then 2 journals by adam smith scholars out of Glasgow where engines began in 1760- Social Business; New Economics have invited academic worlds and young graduates to question where the human race is going - after 30 business trips to wealthier parts of Asia, through 2010s I have mainly sherpa's young journalist to Bangladesh - we are filing 50 years of cases on women empowerment at these web sites AbedMOOC.com FazleAbed.com EconomistPoor.com EconomistUN.com WorldRecordjobs.com Economistwomen.com Economistyouth.com EconomistDiary.com UNsummitfuture.com - in my view how a billion asian women linked together to end extreme poverty across continental asia is the greatest and happiest miracle anyone can take notes on - please note the rest of this column does not reflect my current maps of how or where the younger half of the world need to linkin to be the first sdg generation......its more like an old scrap book

 how do humans design futures?-in the 2020s decade of the sdgs – this question has never had more urgency. to be or not to be/ – ref to lessons of deming or keynes, or glasgow university alumni smith and 200 years of hi-trust economics mapmaking later fazle abed - we now know how-a man made system is defined by one goal uniting generations- a system multiplies connected peoples work and demands either accelerating progress to its goal or collapsing - sir fazle abed died dec 2020 - so who are his most active scholars climate adaptability where cop26 november will be a great chance to renuite with 260 years of adam smith and james watts purposes t end poverty-specifically we interpret sdg 1 as meaning next girl or boy born has fair chance at free happy an productive life as we seek to make any community a child is born into a thriving space to grow up between discover of new worlds in 1500 and 1945 systems got worse and worse on the goal eg processes like slavery emerged- and ultimately the world was designed around a handful of big empires and often only the most powerful men in those empires. 4 amazing human-tech systems were invented to start massive use by 1960 borlaug agriculture and related solutions every poorest village (2/3people still had no access to electricity) could action learn person to person- deming engineering whose goal was zero defects by helping workers humanize machines- this could even allowed thousands of small suppliers to be best at one part in machines assembled from all those parts) – although americans invented these solution asia most needed them and joyfully became world class at them- up to 2 billion people were helped to end poverty through sharing this knowhow- unlike consuming up things actionable knowhow multiplies value in use when it links through every community that needs it the other two technologies space and media and satellite telecoms, and digital analytic power looked promising- by 1965 alumni of moore promised to multiply 100 fold efficiency of these core tech each decade to 2030- that would be a trillion tmes moore than was needed to land on the moon in 1960s. you might think this tech could improve race to end poverty- and initially it did but by 1990 it was designed around the long term goal of making 10 men richer than 40% poorest- these men also got involved in complex vested interests so that the vast majority of politicians in brussels and dc backed the big get bigger - often they used fake media to hide what they were doing to climate and other stuff that a world trebling in population size d\ - we the 3 generations children parents grandparents have until 2030 to design new system orbits gravitated around goal 1 and navigating the un's other 17 goals do you want to help/ 8 cities we spend most time helping students exchange sustainability solutions 2018-2019 BR0 Beijing Hangzhou: 

Girls world maps begin at B01 good news reporting with fazleabed.com  valuetrue.com and womenuni.com


online library of norman macrae--


MA1 AliBaba TaoBao

Ma 2 Ali Financial

Ma10.1 DT and ODPS

health catalogue; energy catalogue

Keynes: 2025now - jobs Creating Gen


how poorest women in world build

A01 BRAC health system,

A02 BRAC education system,

A03 BRAC banking system

K01 Twin Health System - Haiti& Boston

Past events EconomistDiary.com

include 15th annual spring collaboration cafe new york - 2022 was withsister city hong kong designers of metaverse for beeings.app

© 2024   Created by chris macrae.   Powered by

Report an Issue  |  Terms of Service