James Zou
Palo Alto, California, United States
19K followers
500+ connections
About
James Zou is an associate professor of Biomedical Data Science and, by courtesy, CS and…
Activity
James Zou shared this:
Nice article in The Stanford Daily today profiling our AI for drug discovery research: https://lnkd.in/gUuN9ySR
"Stanford researchers develop AI scientists for therapeutic discovery"
-
James Zou reposted this:
Stanford Institute for Human-Centered Artificial Intelligence (HAI) · 2d
Today's AI+Science conference brought together leading minds to explore how AI is transforming scientific discovery across every discipline. Our afternoon sessions took us from AI in fundamental science to the role of human understanding in the future of scientific discovery. Darío Gil, the Under Secretary for Science at the U.S. Department of Energy, delivered our afternoon keynote, highlighting the vital connection between technological innovation and public policy in addressing our most pressing challenges. Leading scientists and researchers also explored how AI is revolutionizing our approach to the universe's deepest questions – from mathematics to physics to astrophysics. And lastly, our interdisciplinary panel tackled perhaps the most profound question of the day: What is the role of human understanding in this new era of AI-powered scientific discovery? The conversation affirmed that while AI is transforming how we conduct science, the human elements of curiosity, creativity, interpretation, and meaning-making remain at the heart of scientific endeavor. Thank you to everyone who joined us today! The formal program is done, but the conversation continues with John Hennessy, James Landay, and Fei-Fei Li in a fireside chat reflecting on today's exciting news and discussions on the future of AI in research. Read more about their perspectives and why Stanford is restructuring for AI's next era: https://lnkd.in/gwPnrvwX
-
James Zou shared this:
Thrilled to participate in the Stanford Institute for Human-Centered Artificial Intelligence (HAI) AI + Science Conference!
-
James Zou reposted this:
Excited to share that our paper DSGym has been accepted to #ICML2026! Grateful to be part of this project and huge thanks to my amazing co-authors and the entire team ❤️
-
James Zou reposted this:
We're partnering with InVision Medical Technology to turn filed-away echocardiograms into the cardiac data that clinical development has been missing. Every echocardiogram records enough raw data to produce dozens of quantitative measurements. But physicians only measure what that patient needs in that moment. The rest gets filed away. Hundreds of additional data points live in those files. Through our Clinical AI Marketplace, InVision's echocardiography AI processes the raw echo data to generate up to 30x more measurements per patient (strain, filling pressures, chamber volumes, wall thickness), all standardized using a single methodology regardless of site or time period. Layered onto Dandelion's decade of longitudinal patient records, this gives life sciences companies a new way to phenotype heart failure populations, measure how therapies change the heart, and design trials against real cardiac physiology instead of billing codes. Link to the press release in comments.
-
James Zou reposted this:
I'm excited to share our work "Multi-Agent Teams Hold Experts Back" was accepted to #ICML2026 🚀 Thank you to wonderful collaborators for making this such a fun project: James Zou, Batu El, Hancheng Cao, Carmelo di Nolfo, Yanchao Sun, Meng Cao

I'm excited to share my first PhD project: Multi-Agent Teams Hold Experts Back. Many modern multi-agent systems use pre-specified workflows, fixed roles, and aggregation rules. As multi-agent teams handle increasingly complex tasks, what happens when we can't specify optimal workflows ahead of time? As the title suggests, we find that multi-agent teams with differential expertise consistently underperform their best member. Even when told explicitly which model on the team is the expert, teams struggle to properly leverage their expertise. Our analysis suggests this is due to non-expert agents prioritizing consensus-building over correctness. We hypothesize this may be an unintended side effect of alignment-oriented post-training. We also find that this failure mode worsens with team size: larger teams wash out signal from expert agents. The takeaway? Multi-agent teams aren't yet capable of self-organizing to achieve performance better than their best agent. As frontier models proliferate, each with unique comparative advantages, teams will need to properly leverage expertise to actually outperform what any single model alone can accomplish. I am very excited by this new direction of building multi-agent systems that can self-organize without human specification/supervision. Huge thank you to wonderful collaborators: James Zou, Batu El, Hancheng Cao, Carmelo di Nolfo, Yanchao Sun, and Meng Cao! Paper: https://lnkd.in/gDp5Fvnv
-
James Zou reposted this:
We're proud to be co-funding three innovative projects with the Stanford Institute for Human-Centered Artificial Intelligence (HAI), which awarded $2.17M to ambitious AI projects this week. In a record-breaking year for seed research grant proposals, 29 teams received grants to further their work. Here are the three projects we're co-funding with Stanford HAI:
"A Human-Centered AI Database for Multi-Organ Point-of-Care Ultrasound (POCUS) Education" from Main PI Andre Kumar, MD, MEd, Stanford School of Medicine (Med/Hospital Medicine)
"Early Detection of Pediatric Pneumonia Spikes in Ethiopia Using AI Cloud-Based Data Integration" from Main PI John Openshaw, Stanford School of Medicine (Infectious Diseases and Geographic Medicine) and Co-PI Rishi Mediratta, Stanford School of Medicine (Pediatrics)
"Empowering Biomedical Researchers with Microbiome-Specialized AI Agent Co-Pilots" from Main PI Justin Sonnenburg, Stanford School of Medicine (Microbiology and Immunology) and Co-PI James Zou, Stanford School of Medicine (Biomedical Data Science)
Learn more about the program and their work: https://lnkd.in/ewvV52vW
-
James Zou reposted this:
Big update to Paperclip 📎: We've added arXiv + 150M abstracts to our corpus! We've been using it internally for a couple of weeks, and I think I've only scratched the surface of what agents can do with it. Read our blog for examples and our agent-satisfaction survey: https://lnkd.in/gURR4CNr
-
James Zou shared this:
Big Update 🤩: #paperclip now includes full papers from all of arXiv and PubMed Central, plus 150 million abstracts! 🖇️ You can give your LLM all that knowledge in one line, all optimally indexed for AI agents. Much more thorough and ~100x faster than web search, and free.
Quick install: curl -fsSL https://lnkd.in/eZHQS4r3 | bash
Blog: https://lnkd.in/eyA6EZuH
-
James Zou liked this:
I'm leaving Salesforce AI Research after an incredible ride. Grateful to have worked with such an amazing group of people! For my next adventure, I'm building Spellshot (www.spellshot.ai). The inspiration for Spellshot came from my wife, Abby (https://lnkd.in/gXU7w5ab). She's a part-time creator and struggled to manually edit videos due to a repetitive strain injury in her hands. Meanwhile, I found myself dictating my workflows to increasingly capable AI agents, and I thought there must be a better way to make and edit videos. So I built Spellshot for her. Now she uses Spellshot for every video. We hope to be able to share some of this magic with the rest of the world. If you (or someone you know) spend a lot of time on video editing, I'd love to connect.
-
Experience
Education
Recommendations received
-
LinkedIn User
“I've had the amazing good fortune of working with Dr. Zou as a SAIL Affiliate. His remarkable work on Data Shapley, Multiaccuracy, Neuron Shapley, and cPCA has resonated very well across our client base. It is indeed a blessing to know someone who has focused his research work around commercially viable ideas that make a tangible difference - be it in terms of bias removal or model performance. I continue to learn from him. As an industry affiliate, coach, and mentor - you will not find a better bet than James!”
1 person has recommended James
Explore more posts
-
Euan Ashley
Stanford University • 20K followers
We report new work from our group in collaboration with Google's genomics team. A challenge with genetic discovery for heart failure is that its classification within large-scale biobanks is extremely vague. This has led to the surprising situation where, in 2026, there are only 2 validated GWAS loci for heart failure with preserved ejection fraction, a sub-group that accounts for HALF of all heart failure patients; heart failure is the single most common admitting diagnosis in patients over the age of 65 (more than 30 million people globally). We need to do better. One approach is to use machine learning to develop a precision phenotype (for this you need rich health care system data from many patients, not biobank data). Then you can take that probabilistic framework and apply it to biobanks which, while less enriched for patients, provide the power of large scale and deep genomics and proteomics. This was the idea that Jack W O'Sullivan MD, PhD and a large collaborating team sought to pursue in this work. The findings went beyond all our expectations. First, the team increased the number of loci confidently associated with HFpEF almost 50-fold! We report 99 loci, up from 2. Second, by integrating multi-omics in a causal inference framework, we prioritize gene-protein pathways as candidates for therapeutic development while de-prioritizing non-causal targets, reclassifying them as biomarkers. How could we tell we were on track? We noted first that three clinical trials for one of our deprioritized targets (MPO) were negative (genomics really can save money in drug discovery). We also validated one of our prioritized targets in a novel clinical study. In conclusion, we hope this work accelerates our understanding of heart failure. We believe this approach to precision phenotyping can extend the reach of our global biobanks in many directions. Finally, we believe incorporating causal inference in a multi-omic framework can directly inform investment decisions in drug discovery pipelines. Congratulations to the entire team and special thanks to our Google collaborators, who are the co-leaders of this work: Taedong (Ted) Yun, Cory McLean, Andrew Carroll. Paper: https://lnkd.in/gamstJQM
532
17 Comments
-
Jinfeng Zhang
Insilicom LLC • 8K followers
🚨 Milestone for AI research: DeepSeek-R1 featured on the cover of Nature
On September 17, DeepSeek-R1, built with reinforcement learning for reasoning, became the first large language model to pass independent peer review and be published in Nature.
Why this matters:
🔹 Scientific breakthrough – DeepSeek-R1 shows that reinforcement learning without human-annotated reasoning traces can trigger the emergence of advanced reasoning behaviors like reflection, verification, and strategy adaptation.
🔹 Performance – DeepSeek-R1-Zero surpassed human competitors on the AIME 2024 math olympiad benchmark and excelled in coding and STEM tasks.
🔹 Open science – Alongside the paper, the team released smaller distilled models, enabling the community to study and apply these reasoning methods.
🔹 Transparency & rigor – The Nature publication includes a 64-page peer review file and a comprehensive safety report. DeepSeek directly addressed concerns about data contamination and "distillation," detailing decontamination steps and clarifying that reasoning abilities were learned independently, not borrowed from other models.
🔹 Broader impact – With >10 million downloads on Hugging Face, DeepSeek-R1 has become the most widely adopted open-source reasoning model. Its combination of open release + peer review sets a new bar for transparency and reproducibility in AI research.
As Nature noted in its editorial: in an industry often dominated by unverified claims, DeepSeek has taken a remarkable step toward transparency and credibility.
👉 Full paper (open access): https://lnkd.in/eVSH6pT5
👉 Peer review file: https://lnkd.in/eTV2nhPF
21
-
NYU Center for Data Science
15K followers
Yubei Chen, fresh from a CDS postdoc under CDS founding director Yann LeCun and Meta FAIR, brought insights from his computational‑neuroscience and self‑supervised learning research at UC Berkeley into founding Aizip. Chen drew on techniques originally used for visualizing word embeddings and concept neurons to shape Aizip’s focus on tiny, interpretable models designed for edge‑device deployment. Aizip’s AI nanofactory and Gizmo model line (300M–2B parameters) enable on‑device tasks like face recognition, ECG/EEG analysis, keyword spotting, and chatbots—all without the cloud, reducing power and latency. Chen noted that most AI innovation was pushing model scale, but real‑world use cases often demand opposite constraints: “high efficiency, low power consumption, and minimal latency.” By optimizing algorithms for resource‑constrained hardware, Aizip is enabling scientific rigor to meet deployment needs. Read the full interview: https://lnkd.in/eBrtb9cS #EdgeAI #TinyML #OnDeviceAI #AIInterpretability
65
-
Nishantha Ruwan
IWROBOTX Software Inc. • 2K followers
The study introduces GPN-Star, a novel genomic language model (gLM) designed to learn functional constraints in DNA sequences across evolutionary timescales by explicitly incorporating phylogenetic information and whole-genome alignments into its architecture. Traditional gLMs adapted from natural language processing often demand enormous model size and computation yet underperform compared to classical evolutionary models, especially in complex genomes such as humans. GPN-Star overcomes these limitations by leveraging species relationships through phylogeny-aware mechanisms and training on whole-genome alignments spanning vertebrate, mammalian, and primate divergences. When benchmarked on a wide range of variant effect prediction tasks, GPN-Star achieves state-of-the-art performance in both coding and non-coding regions, outperforming previous models in prioritizing pathogenic variants, enriching complex trait heritability, and increasing power in rare variant association tests. The framework also generalizes to other organisms — including mouse, chicken, fruit fly, nematode, and Arabidopsis — demonstrating robustness and broad applicability. Overall, GPN-Star presents a scalable, flexible tool for genome interpretation that efficiently integrates evolutionary signals and modern deep learning to improve the functional annotation of genetic variation. Read: https://lnkd.in/g2Driw6c
1
-
Claire Yo
Science Exploration Press • 381 followers
🔬 New Open-Access Article in Computational Biomedicine ❗ Drug‑target affinity prediction based on multi-source information and graph convolutional network Lei X, Tang X, Zhang Y. Computational Biomedicine 2026;1:202520 🔗 Read here: https://lnkd.in/geup-vpd 💡 This study introduces a novel deep learning framework for drug–target affinity (DTA) prediction, integrating graph convolutional networks with multi-source molecular and protein features. Key innovations include: ✨ Multi-source feature integration – combining molecular graphs, protein structures, and biological descriptors for richer representation. ✨ Graph neural network-based modeling – using GCN and graph isomorphism networks to capture local and global interactions. ✨ Enhanced predictive performance – significant improvements on benchmark datasets (Davis & KIBA), demonstrating superior accuracy and reliability. Corresponding author: Prof. Xiujuan Lei, Shaanxi Normal University 💡 If your research focuses on AI-driven drug discovery, computational pharmacology, or graph-based bioinformatics, we invite you to check out this work, share it with your network, and cite it if relevant! 📩 CBM also welcomes submissions of high-quality studies in these areas – join us in advancing computational biomedicine! #CBM #DrugTargetAffinity #GraphNeuralNetworks #ComputationalDrugDiscovery #AIinBiomedicine #Bioinformatics
12
2 Comments
-
Eric Bonabeau
Decoding Discontinuity • 46K followers
𝗘𝘃𝗼𝗹𝘂𝘁𝗶𝗼𝗻 𝗶𝘀 𝗱𝗲𝗳𝗶𝗻𝗶𝘁𝗲𝗹𝘆 𝗯𝗮𝗰𝗸, 𝗣𝗮𝗿𝘁 𝗜𝗜
At a recent "Workshop on AI in the Cloud" at UC Berkeley organized by the Industry-Academia Partnership (IAP, see https://lnkd.in/gJAqvdyA), Ion Stoica (UC Berkeley professor and a co-founder of Databricks, Anyscale, and LMArena) gave a fantastic, and fantastically clear, overview of the *few* things he has done. Some examples include "Ray, a distributed framework for scaling AI workloads, vLLM and SGLang, two high-throughput inference engines for LLMs, and LMArena, a platform for accurate LLM benchmarking." But what got me is his answer to someone asking what to focus on in research. His answer: reinforcement learning and 𝗲𝘃𝗼𝗹𝘂𝘁𝗶𝗼𝗻𝗮𝗿𝘆 𝗮𝗹𝗴𝗼𝗿𝗶𝘁𝗵𝗺𝘀! When I asked him about evolution after his talk, he said: (1) "we love them, they work really well" and (2) his team is a big user of OpenEvolve, an open-source implementation of Google DeepMind's AlphaEvolve. From Asankhaya Sharma's blog on Hugging Face (https://lnkd.in/gN_aR-eF): "AlphaEvolve represents a significant advancement in this field by:
1. Using LLMs to generate sophisticated code modifications
2. Evaluating these modifications with automated metrics
3. Using an evolutionary framework to improve promising solutions
4. Evolving entire codebases, not just individual functions
OpenEvolve implements these principles in a flexible, configurable open-source package."
Evolutionary algorithms are having a rebirth and can be combined with LLMs and other forms of 𝗴𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝘃𝗲 models to produce powerful novel concepts, hypotheses, or algorithms. Until now, they have not really been deployed at the kind of scale that LLMs and other neural networks have. I can't wait to see what happens when we get to that scale. Google DeepMind and Sakana AI have been working on exciting applications, a breath of fresh air in a rather stuffy technical monoculture. If Ion Stoica is enthusiastic about them, it is a sign! Joshua Knowles, Danny Hillis, Llion Jones, Thomas Wolf
11
1 Comment
-
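The four-step AlphaEvolve recipe quoted in the post above boils down to a generate-evaluate-select loop. A minimal sketch of that loop follows; the `mutate` callable is a random stand-in for the LLM proposal step and `evaluate` is a toy numeric metric (both are hypothetical illustrations, not OpenEvolve's actual API):

```python
import random

random.seed(0)  # make the toy run reproducible

def evolve(seed_candidate, mutate, evaluate, generations=20, pop_size=8):
    """Generate-evaluate-select loop: propose variants of current candidates,
    score them with an automated metric, and keep the best performers."""
    population = [seed_candidate]
    for _ in range(generations):
        # Proposal step: in AlphaEvolve-style systems an LLM edits code here.
        parents = random.choices(population, k=pop_size)
        children = [mutate(p) for p in parents]
        # Survivor selection: truncate to the top pop_size by the metric.
        population = sorted(population + children, key=evaluate, reverse=True)[:pop_size]
    return max(population, key=evaluate)

# Toy usage: candidates are numbers, "mutation" is a random nudge,
# and the metric rewards closeness to a target value of 10.
best = evolve(
    seed_candidate=0.0,
    mutate=lambda x: x + random.uniform(-1.0, 1.0),
    evaluate=lambda x: -abs(x - 10.0),
)
```

Real systems in this family replace the numeric candidate with a whole program, the random nudge with LLM-generated code edits, and the metric with benchmark or unit-test scores.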
ASCPT Journal Family
3K followers
🚀 Large language models (LLMs) are rapidly reshaping the landscape of clinical and translational science. The latest Translational Bytes post by James Lu for #CTSjournal explores how these tools can be leveraged for research, communication, and discovery—while also raising important questions about responsible use, transparency, and equity. 📖 Read here: https://bit.ly/3IawfvA Whether you're curious about integrating AI into your workflows or considering the broader implications for the field, this piece offers a thoughtful primer on both the opportunities and challenges ahead. What role do you see LLMs playing in advancing translational science? #AI #LLM #TranslationalScience #ClinicalResearch #Innovation #ASCPTjournals
13
-
Amar Sahay
2K followers
Excited to share our study in Neuron led by Travis Goode, a K99 postdoctoral fellow (on the job market), other members of the Sahay lab, and collaborators at UW Seattle, Hopkins, and UCSF, defining a neural circuit that links experience with feeding. Open Access: Download from https://lnkd.in/dh5wQZv #eatingdisorders #obesity
Significance: Remember that great experience you had at that restaurant and how you want to go back. Food consumption, or the motivation to eat, depends on internal states (ghrelin, incretins) and external cues or "contexts". Our environment or context exerts a powerful influence on where we do and don't choose to eat. The brain circuits and cell types underlying how our prior experiences reinforce or calibrate eating at specific locations are poorly understood. Defining such mechanisms may shed light on therapeutics to treat disordered eating in humans, such as binge eating, that may arise from loss of contextual control or calibration of eating. The hippocampus, a seahorse-shaped memory center in the brain, is crucial for forming memories of our experiences. There is growing recognition from human imaging studies that hippocampal activity and hippocampal connectivity with feeding centers of the brain, like the hypothalamus, are altered in individuals who are binge eaters and obese. We show that the hippocampus recruits Prodynorphin-producing neurons in the dorsolateral septum (DLS Pdyn) to instruct an evolutionarily conserved feeding circuit module in the lateral hypothalamus to confer experience-dependent calibration of feeding.
What we found: Using snRNAseq (29K cells) and in situ hybridization, we mapped neuropeptidergic cells in the dorsolateral septum and identified a sparse population of DLS Pdyn neurons that is conserved from mice to humans. Using activity-dependent imaging, we found that DLS Pdyn neurons exhibit context-dependent activity, with larger changes in activity when mice were in a food-reinforced context. Using optogenetic methods to silence DLS Pdyn neurons, their inputs from the hippocampus, or prodynorphinergic inputs to the lateral hypothalamus, we found that context-conditioned feeding was impaired, i.e., mice ate comparable amounts of food in the reinforced context and a novel context. We observed similar outcomes when Pdyn was genetically deleted from these neurons, suggesting that kappa opioid receptors may have a role in contextual regulation of feeding.
Implications for pharmacotherapy for weight loss and eating disorders: Dysfunction of PDYN production or impairments in the physiology or circuitry of these cells may contribute to disordered eating. Our instantiation of a circuit for contextual control of eating refines our circuit-based understanding of weight loss- and eating-targeted pharmacotherapies. Since GLP1R is expressed in DLS Pdyn neurons and stimulation of these cells suppresses feeding and triggers aversion behavior, it may be that GLP1R agonists exert their effects on food intake, in part, through dysphoria.
194
3 Comments
-
Bruce Ratner, PhD
Bruce stands as a… • 23K followers
*** AI Discovers New Biology *** Stanford researchers have made a significant advance by introducing an AI agent that autonomously uncovers valuable biological insights from single-cell RNA sequencing (scRNA-seq) data. While there is a wealth of publicly available datasets, many remain under-analyzed due to technical challenges and limited human resources. **CellVoyager** is an innovative LLM-based agent that operates entirely autonomously. It enhances previous analyses and generates novel hypotheses from scRNA-seq data. Key accomplishments include:
1. CellVoyager demonstrated superior performance, surpassing GPT-4o and o3-mini by up to 20% in predicting results from real author-conducted analyses across 50 scRNA-seq papers in the newly established CellBench benchmark.
2. The AI agent identified an intriguing finding: CD8+ T cells in COVID-19 patients exhibit significantly elevated pyroptosis scores (p = 0.001). This hypothesis had not been investigated in the original research.
3. It made a notable discovery regarding menstrual-phase-specific receptor-ligand signaling between endometrial stromal fibroblasts and endothelial cells, a hypothesis confirmed by testing 40 gene pairs.
4. CellVoyager also revealed that increased transcriptional noise associated with aging occurs in specific brain cell types (microglia and oligodendrocytes), using data from a single-cell atlas of the subventricular zone.
CellVoyager's architecture is particularly impressive. It employs a dual-loop system with an LLM planner (either o3-mini or GPT-4o) for hypothesis and code generation and a vision-language model (VLM) for interpreting outputs. The separation of planning and interpretation modules is particularly compelling, as it aligns with modular cognitive architectures in robotics that distinguish between perception and planning agents. Additionally, the self-critique step that follows each analysis plan generation is a valuable practice. --- B. Noted
14
1 Comment