"Not Even Wrong" Podcast
Investing in fundamentally new concepts and engineering practices with large impact.


 

Episode January 19. 2024

Announcing the start of a new series: “Foundation models for embodied decision making agents”. We discuss research and development in the area of embodied decision making agents (in short: robotics). In particular we focus on autonomous driving and the expected transition from dedicated, task-specific architectures towards a unified foundation model architecture for robots, independent of domain-specific tasks. We discuss research papers, seminars, and applications in industry.

Episode January 14. 2024

Part 2 of the discussion of “The Deluge” by Stephen Markley. 1/3 The biggest risk to Western civilization is catastrophism. That is, when self-proclaimed experts misappropriate projections of potential catastrophes for their personal benefit. Western institutions are robust when dealing with real disasters but run the risk of being undermined by collectivism when hijacked by prophets of doom. 2/3 Climate change is a bad problem because it’s not clear and thus opens the door to demagogues. De-carbonization is a good problem because it’s clear and solutions can be developed by innovations in science and technology. 3/3 The book has good narratives and deep character buildup. The author lacks domain knowledge, which leads to a naive depiction of environments such as Wall Street, government or the FBI.

Episode January 13. 2024

Ground truth. Where do we stand with AI in robotics? Tesla is working on end-to-end FSD. Is this possible? What does science think about that? Sergey Levine talks about offline RL and similarities to behavioral cloning. The latter is a predictive model, while the former is a decision making agent. The goal is to build a data driven decision making agent. Can a robot make better decisions than the data it is trained on? Ashok Elluswamy talks about the world model at Tesla. Generalizable learning for robots in all kinds of embodiments. Pieter Abbeel says that fundamentally a decision making agent must understand what it doesn’t know from data and be cautious about it.

 

Episode January 12. 2024 II

Events recap, week 1/8-1/12. 1/3 Hannah Stuart, Berkeley. Haptic sensing for robots. Use suction, sound and other haptic sensors to enhance robot perception and improve navigation, functionality and performance. 2/3 Yuanbo Zhang, Fudan University. Develop materials, i.e. crystals, that act as quantum anomalous Hall (QAH) insulators. Use for quantum computation, sensing, low-power electronics, spintronics. The key is to find the structure that best performs as a QAH insulator. 3/3 Nafea Bshara (Amazon). Creator of Nitro. Nitro is a silicon and software interface that helps AWS manage workloads flexibly. Step by step increase in performance, predictable so customers know what to expect.

Episode January 12. 2024

Reaction to The Deutsch Files I, a podcast discussion between Naval, Brett Hall and David Deutsch. Creativity and intelligence? Deutsch seems to be negative about AI and creativity. Isn’t it just a matter of iteration and error correction? Creativity is the ability (and/or incentive) to explore counterfactuals at high speed and at low cost of error and error correction. Turing machines can in principle turn into creative machines because they have all the components necessary for creativity. Whether it’s human or not is another question. What would Popper say about AI and creativity? In particular, what would Popper say about Q-learning and the ability of machines to iterate on their own learning process?
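The “iteration and error correction” framing maps directly onto the Q-learning update: each step corrects the value estimate by a temporal-difference error. A minimal sketch on a toy corridor environment (all sizes and parameters invented here):

```python
import random

# Tabular Q-learning on a toy 5-state corridor (all parameters invented).
# Each update is an act of error correction: the TD error measures how wrong
# the current value estimate is, and the estimate is nudged toward the target.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2
Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, 1)}

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == GOAL else 0.0)   # reward only at the goal

random.seed(0)
for _ in range(500):                           # iterate: act, err, correct
    s = 0
    while s != GOAL:
        greedy = max((-1, 1), key=lambda a: Q[(s, a)])
        a = random.choice((-1, 1)) if random.random() < EPS else greedy
        s2, r = step(s, a)
        td_error = r + GAMMA * max(Q[(s2, -1)], Q[(s2, 1)]) - Q[(s, a)]
        Q[(s, a)] += ALPHA * td_error
        s = s2

policy = [max((-1, 1), key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)  # greedy policy before the goal
```

After a few hundred episodes of trial and correction the greedy policy points toward the goal everywhere, without anyone having told the agent what the goal is.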

Episode January 11. 2024

Neurips Part 6

1/3 Pre-Training for Robots: Offline RL Enables Learning New Tasks in a Handful of Trials. How can offline RL generalize? By representing knowledge. Example: a doctor prescribing medicine. This can be generalized in offline RL, and a decision making agent can be trained like that. 2/3 When Do Transformers Shine in RL? Decoupling Memory from Credit Assignment. Transformers are good at memory because they compress knowledge. Not so good at credit assignment. They don’t really know why they know things. 3/3 Bridging RL Theory and Practice with the Effective Horizon. In RL it is important to have step by step information even in sparse reward environments. In principle, RL is random exploration until you find something with high reward; then you stick with it until the reward goes down, then random exploration again until high reward, etc. Reward shaping helps nudge the process towards the ultimate goal.
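The reward-shaping idea in 3/3 can be made concrete with potential-based shaping, F = γφ(s′) − φ(s), which nudges exploration toward the goal without changing the optimal policy. A sketch with an assumed distance-style potential:

```python
# Potential-based reward shaping: F(s, s') = gamma * phi(s') - phi(s) added to
# the sparse reward. The potential phi(s) = s (progress along a chain) is an
# assumed heuristic; potential-based shaping leaves the optimal policy intact.
GAMMA = 0.99

def phi(s):
    return float(s)            # potential: how far along the chain we are

def shaped_reward(s, s2, r):
    return r + GAMMA * phi(s2) - phi(s)

# A step toward the goal (s=2 -> s2=3) earns a positive nudge even though the
# sparse environment reward r is still 0.
print(round(shaped_reward(2, 3, 0.0), 2))
```

The agent now gets a dense learning signal at every step instead of only at the sparse terminal reward.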

 

Episode January 10. 2024

Book Discussion Part 1, “The Deluge” by Stephen Markley. Anatomy of a demagogue: bend reality to serve your message. Hijack a real issue (in this case carbon saturation of the atmosphere) and turn it into a religious contest (i.e., fight climate change) with followers opposing non-followers. There is a Trump in everybody, regardless of where they come from, socially and politically. Climate change is a bad problem and prone to misuse. The real problem is that industry is burning too much carbon. This can be solved with technology and entrepreneurial initiative. Just ask yourself, what is the antidote to climate change? Static climate! That’s nonsense. Climate change is a Trojan horse for Malthusian hypocrisy and collectivist coercion.

 

Episode December 29. 2023

What’s working and what isn’t. What's working: 1/3 Macro getting better. Less coercion. Less eco-socialism, less ethno-fascism, less Orwellian sloganism. 2/3 The transformer architecture is a breakthrough for AI and robotics. 3/3 China is a positive force and competitive pressure. Will revitalize a sluggish US. What's not working: 1/3 US debt and deficit spending. 2/3 Complacency and lack of risk taking in an increasingly indeterminate economy. 3/3 Tesla committing unforced errors due to the tabloid lifestyle of Elon Musk. Need more focus. Either come back or delegate!

Episode December 28. 2023

Neurips Part 5

1/5 Tree of Thoughts. An LM reasons about the available thoughts on a decision tree and decides by using a value and/or voting mechanism. What’s worth pursuing? 2/5 Why do we think step by step? Locality. Training data is locally isolated but connectable. Locally structured data that can be connected promises to be more efficient for training. 3/5 Why does In-Context Learning work? Transformers are good at choosing the right algorithm to solve problems based on prompts. 4/5 Why does Chain of Thought work? CoT works better because of the parallel complexity of tasks: many tasks require sequential reasoning, where CoT does better. In the limit, with enough size, transformers could match CoT with direct inference, but that is inefficient. 5/5 Statistical analysis of GANs and why they work. PhD thesis by Yannic. Image classifiers might be biased during training and thus vulnerable to perturbations. When they learn what a dog is, they don’t really understand what a dog is; they might be biased by low dimensional data, which can be perturbed easily, and thus the whole model can be thrown overboard.
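The value-based selection in Tree of Thoughts (1/5) can be sketched as beam search over partial “thoughts”; here both the proposer and the value function are toy stand-ins for what would be LM calls in the paper:

```python
# Beam search over partial "thoughts". propose() and value() are toy
# stand-ins: in the paper both would be calls to a language model.
OPS = {"+1": lambda x: x + 1, "*2": lambda x: x * 2, "-3": lambda x: x - 3}

def propose(thought):
    return [thought + [op] for op in OPS]       # expand candidate thoughts

def value(thought):
    x = 3                                       # toy task: reach 10 from 3
    for op in thought:
        x = OPS[op](x)
    return -abs(10 - x)                         # higher is better

def tree_of_thoughts(depth=3, beam=2):
    frontier = [[]]
    for _ in range(depth):
        candidates = [t for th in frontier for t in propose(th)]
        frontier = sorted(candidates, key=value, reverse=True)[:beam]  # prune by value
    return frontier[0]

best = tree_of_thoughts()
print(best, value(best))
```

The point is the control flow: expand, score, keep only what’s worth pursuing, repeat.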

Episode December 27. 2023 II

An investor’s guide to Neurips. In the hedge fund world there are three major decisions: 1/3 How to choose a theme? 2/3 How to navigate a theme and choose investments? 3/3 How to decide whether you’re right or wrong about investments? In this episode we focus on the second: how to navigate a theme. In our case it’s the application of AI to the real world. We use Neurips as a case study. How to navigate a large conference like that? Narrow your focus, follow the program, invest time in random exploration and keep the social part goal oriented (party is fun, not work).

Episode December 27. 2023

Discussing Peter Thiel’s comments on the lack of progress in science and technology. 1/6 Fiat money fosters a culture of indeterminism. 2/6 This leads to indeterminate industries like Wall Street, insurance, law, medicine etc., where virtue signaling gets you promoted, not performance. 3/6 The academic industrial complex is infected with “Glasperlenspiel” (glass bead game) syndrome. 4/6 No risk taking culture. Whether it’s monetary, career or scientific, risk taking is neither necessary nor encouraged. 5/6 Lack of progress raises the risk of a slump into totalitarianism. 6/6 Companies like Tesla, Nvidia, SpaceX, Microsoft and a few others are the hope. The best way to shake up the academic industrial complex is to disentangle the pipeline of new hires from it.

Episode December 25. 2023

Neurips Part 4

DeepMind. 1/6 Promptbreeder. A general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. 2/6 Self-Debugging. Teaching a large language model to debug its predicted program via few-shot demonstrations. 3/6 Chain of Code. For numerical summaries a chain of code gives better results. The LM can use the code to run through the text and find the answers (like: how many times does the author use sarcasm?). 4/6 FunSearch. Discovery through evolution. Search the function space, write the function in a computer program, evaluate, repeat. 5/6 Robotics. What are RT-X and Open-X? Generalization through sketches. 6/6 Understanding Diffusion Objectives as the ELBO with Simple Data Augmentation.
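The FunSearch loop in 4/6 (search, write, evaluate, repeat) reduces to an evolutionary sketch; here coefficient pairs stand in for the LM-written programs, and the curve-fitting task is an invented toy:

```python
import random

# Evolutionary search sketch: a population of candidate "programs" (here just
# coefficient pairs for y = a*x + b, a toy stand-in for LM-written functions)
# is scored, the best survive, and mutated copies replace the rest.
random.seed(1)
DATA = [(x, 2 * x + 1) for x in range(10)]     # invented target: y = 2x + 1

def score(c):
    a, b = c
    return -sum((a * x + b - y) ** 2 for x, y in DATA)  # negative squared error

def mutate(c):
    return [v + random.uniform(-0.5, 0.5) for v in c]

population = [[random.uniform(-3, 3), random.uniform(-3, 3)] for _ in range(8)]
for _ in range(300):                            # evaluate, select, vary, repeat
    population.sort(key=score, reverse=True)
    survivors = population[:4]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(4)]

best = max(population, key=score)
print([round(v, 2) for v in best])
```

In FunSearch the mutation step is an LLM proposing new program text, but the select-and-evaluate skeleton is the same.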

Episode December 21. 2023

Neurips Part 3

1/2 Interconnects podcast with Nathan Lambert (not part of Neurips but related to the discussion). Interview with Tri Dao and Michael Poli from together.ai. Discussing State Space Models and new model architectures beyond attention. 2/2 DATACOMP, a testbed for dataset experiments. What data sources to train on, and how to filter a given data source? The central dogma of ML (data, model, algorithm) is not fixed.

Episode December 20. 2023

Neurips Part 2

1/4 Using LLMs for traffic scenario planning. 2/4 Eureka: LLMs for reward function design. 3/4 DPO, Direct Preference Optimization. Offline RL for optimizing LLMs without RLHF. 4/4 OpenAssistant. An open-sourced data set of thousands of conversations between LLMs and humans. Can be used for DPO and/or RLHF.
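The DPO objective in 3/4 optimizes directly on preference pairs: the loss is −log σ(β·((π_w − ref_w) − (π_l − ref_l))) over chosen/rejected log-probs. A sketch with made-up numbers, not real model outputs:

```python
import math

# DPO loss for one preference pair: chosen (w) vs rejected (l) responses,
# scored by the policy and a frozen reference model. All log-probs below are
# made-up numbers for illustration, not real model outputs.
def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))  # -log sigmoid

# Policy prefers the chosen answer more than the reference does: low loss.
low = dpo_loss(logp_w=-4.0, logp_l=-9.0, ref_logp_w=-5.0, ref_logp_l=-8.0)
# Policy prefers the rejected answer: high loss, gradient pushes back.
high = dpo_loss(logp_w=-9.0, logp_l=-4.0, ref_logp_w=-8.0, ref_logp_l=-5.0)
print(round(low, 3), round(high, 3))
```

No reward model and no RL loop: the preference data itself supplies the training signal.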

Episode December 19. 2023

Neurips 2023 Part 1

Chris Re lecture

Two concepts that lower the compute budget and address the memory wall problem in transformers. 1/2 Flash Attention. Block-wise subdivision of matrix multiplication so that each block fits in SRAM. Running statistics make sure no information is lost. 2/2 RNN and CNN revival. Can efficient filters be used to circumvent the high compute and memory budget of transformers? Yes. Using mathematical concepts borrowed from signal processing, information can be preserved even over longer periods of time.
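The running statistics in 1/2 are the online-softmax trick: process scores block by block, carrying a running max and normalizer, and the result matches full attention exactly. A sketch for a single query (shapes and block size are arbitrary toy choices):

```python
import numpy as np

# Online-softmax sketch behind FlashAttention: attention for one query is
# accumulated block by block over keys/values, carrying a running max m and
# normalizer s, so the full score row is never materialized.
def blockwise_attention(q, K, V, block=4):
    m, s = -np.inf, 0.0
    acc = np.zeros(V.shape[1])
    for i in range(0, K.shape[0], block):
        scores = K[i:i + block] @ q                 # one block of scores
        m_new = max(m, scores.max())
        scale = np.exp(m - m_new)                   # rescale previous stats
        p = np.exp(scores - m_new)
        s = s * scale + p.sum()
        acc = acc * scale + p @ V[i:i + block]
        m = m_new
    return acc / s

rng = np.random.default_rng(0)
q, K, V = rng.normal(size=3), rng.normal(size=(10, 3)), rng.normal(size=(10, 2))
w = np.exp(K @ q - (K @ q).max())
full = (w / w.sum()) @ V                            # reference: full softmax
print(np.allclose(blockwise_attention(q, K, V), full))
```

The rescaling by exp(m − m_new) is what guarantees the block-wise result is numerically identical to the full softmax, not an approximation.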

Episode December 11. 2023

Book discussion "The Rachel Incident" by Caroline O’Donoghue. “What matters most about a person is direction” “His attempts at adult life resembled Peter Pan trying to trap his own shadow” 1/5 You dress for the weather or you make the weather. Take your life in your own hands. “We were both sad and we did not want to be convinced to get out of our sadness” 2/5 People choose to feel a certain way. Feeling is about convincing. You can change somebody’s feeling, but it takes time and effort. You have to be ready for it. 3/5 Coming of age novel about a young Irish girl “Our conversations felt like cover songs of our real conversations” 4/5 Love, commitment, responsibility, selfishness. What does it mean to be with a person? What is direction? 5/5 Ireland in the 2010s, recession, suffering, hope.

Episode December 9. 2023

Events discussion week 12/04-12/08 Part 2

1/2 Stefano Ermon. Diffusion models. What is diffusion? What is latent diffusion? Parallelize diffusion steps with ODEs. Use DPO (direct preference optimization) instead of RLHF.

2/2 Minkai Xu. Equivariance and invariance are symmetries which can be exploited in molecule modeling. Saves time, compute and improves results.

Episode December 8, 2023

Events recap 12/4-12/8 Part 1.

Quantum Computing. 

1/5 Chasm between Quantum Computing and Machine Learning has not been crossed yet. Memory wall is an opportunity.

2/5 Joonhee Choi on Rydberg atoms for Quantum Computing. Long coherence time.

3/5 L3Harris on Rydberg atoms for RF sensing. A large dipole allows for highly sensitive RF sensing.

4/5 Phasecraft on developing algorithms to accelerate the application of Quantum Computing in quantum chemistry.

5/5 IonQ on demand for quantum. FOMO demand. Chad Rigetti expressed limited excitement.

Episode December 7, 2023

Reaction episode to Brett Adcock from Figure.ai on FYI. 1/4 Vertical integration in humanoid robots because hardware is not available. Need for low cost hardware and robust, learning-ready robots. 2/4 Bootstrap robots for learning. 3/4 End to end algorithms with LLMs for semantic organization of the scene. 4/4 Real to sim and sim to real? How to build a long term learning pipeline.

 

Episode December 4, 2023 II

Events discussion week 11/27-12/01 Part 4. 1. Xin Duan. Neural circuit mapping. Trace neural activity from input to brain. 2. Jean Fan. Visualizing cell formation and cell differences by capturing RNA information and statistically processing it. 3. Alex Levy. Extracting 3D models from 2D pictures. Think NeRFs without knowing where the camera is. A generalized model to learn from 2D and extract into 3D. 4. Storage X. Stefan Reichelstein. Modeling a city based 100% on renewables and storage. But why even have storage? Why not just use flexible manufacturing to adapt to electricity needs and have ample renewable capacity? 5. Taran Driver, SLAC. Using attosecond pulses to visualize electrons as they move in molecules. 6. Ciamac Moallemi. Mechanism design. Automated Market Makers for DeFi. A Black-Scholes inspired model.

Episode December 4, 2023

Events discussion week 11/27-12/01 Part 3. 1. Nathan Lambert on RLHF. Could preferences be gamified, or incentivized through financial reward? 2. Andrew Saxe. Why do we have DNNs? Why deep layers? What happens in the layers? How do models learn? What about local minima? 3. Tris Warkentin. AI in operations. What is AI? How does it affect workflows in real companies? 4. Tinglong Dai. Incorporating AI into healthcare workflows. Liability could play in favor of AI.

Episode December 3, 2023

Amendment to the episode about Carlotta Pavese. 1/3 Defining intelligence via programming runs the danger of circularity. 2/3 If you use intelligence to classify people, make sure you know what you’re talking about, i.e. only consider a skill and/or intelligence for classification if you know how to program it. The rest is gibberish. 3/3 The concept of good enough. Intelligence ought to be human centric, like "be funny", "make others feel good" etc.

Episode December 1, 2023 II

Events discussion week 11/27-12/01 Part 2. Philosophy: Carlotta Pavese on intelligence socialism. 1/2 What is intelligence? Adaptive, flexible, learning. How about: if I can program it, it's not intelligence. Intelligence is what we can't teach machines. Human intelligence. If we reduce intelligence to intelligence socialism, what is left to distinguish us from machines? 2/2 Intelligence is a random pretext to justify hierarchy.

Episode December 1, 2023

Events discussion week 11/27-12/01 Part 1. 1/2 Cybertruck delivery event. Price and performance muted. An important technological platform for future products: casting, voltage, battery tech. 2/2 Thierry Tambe. Flexible hardware design across the stack for ML and LLMs. The next thing is multimodal. Three interesting approaches: Adaptive Float to deal with sparsity; eDRAM and customized algorithms with high refresh storage to reduce power consumption per task; and Early Exit plus DVFS (Dynamic Voltage Frequency Scaling) to flexibly adapt the depth of the neural network to the requirements of workloads.

Episode November 28, 2023

Discussing CoRL 2023 Part 3. 1/2 Dieter Fox. Foundation model for robot manipulation in sim. Then sim to real. Real to sim. Bootstrap. The foundation model is supervised learning. Tokenize space. URDF. The robotics problem is putting URDF in reverse. 2/2 Sergey Levine. Offline RL. Supervised learning. Bootstrap.

Episode November 27, 2023 II

❤️ Book discussion “Elizabeth Finch” by Julian Barnes. Some things science cannot tackle, like beliefs and love. Literature can. Barnes tackles those two topics. 1/2 Love and happiness are learnable. “All happy couples are happy in the same way and all unhappy couples are unhappy in their own way.” You can earn happiness by getting better at it. Like a muscle. 2/2 What if Julian the emperor had not been killed, and Hellenic pantheism had prevailed in Rome and later Europe? Would there have been a need for the Renaissance? What about the Enlightenment? How come science emerged out of Christianity?

Episode November 27, 2023

❤️ Every happy robot is happy in the same way and every unhappy robot is unhappy in its own way. Intelligence converges: iteration, recursive updates, learning from experience, low cost error correction. Sim to real. Real to sim. Repeat. Sim can help in reward function and policy design. Robots need a religion, something to guide them about what is right and wrong.

 

Episode November 26, 2023

Discussion of CoRL 2023 Part 2. Highlighted papers: 1/8 LLMs for traffic planning. Prompt a traffic scenario and the simulator does it (for example, car turns left and then sees a pedestrian 20m away right in the lane). 2/8 White paper on using MFMs for generative simulation. MFM = Multimodal Foundation Model. 3/8 RLHF. Nathan Lambert discusses RLHF and mentions the canonical paper on this topic. How to estimate a reward function from behavior? What if we don’t have a reward function but we have a sense for what is good behavior? 4/8 Offline RL. Discuss Sergey’s comments and the paper. Sergey argues that every ML problem is in essence an RL problem. The paper addresses the problem of offline RL, which is based on fixed data sets. What if a new situation arises in inference? Solve the extrapolation problem with regularization. 5/8 Koopman operator. Can the dynamic environment be reduced to linear functions? What about using stable diffusion and the transformer architecture to solve the dexterity problem? 6/8 ViNT: A Foundation Model for Visual Navigation. Tokenize vision. Transformer architecture. Zero shot learning for the self driving car. 7/8 MimicLearning. Use human action to learn from and then execute with low level robot skills. Kind of what Tesla has been doing. FSD is only 2D. Easier. 8/8 Planning in a multi agent heterogeneous driving scenario. Game theory, behavioral modeling. Similar to the paper by Fei Miao discussed previously about the tridirectional relationship among communication, learning and control.

Episode November 24, 2023

Discussing CoRL 2023 Part 1. The key takeaway for me is Sergey Levine’s idea. 1/4 Scalable learning will lead to generalizable robots. But it goes the other way, too: generalizable robots will enable scalable learning. Researchers should prepare for that and think about the data engine, benchmarks, compute stack and algorithms for that kind of environment. Simplicity, low cost. LLMs are simpler than the NLP recipes before them. 2/4 Hardware. Adaptive robots must be able to make mistakes and not break all the time. 3/4 How to bootstrap such a fleet? 4/4 Tesla is on this path. Build low cost robots at scale. Both car and humanoid robot. Low cost drives adoption, drives data generation, drives learning, makes robots better. The AI flywheel in robotics.

Episode November 20, 2023

Events recap Part 3, week 11/13-11/17. 1/5 Joonhee Choi. Measuring entanglement entropy in a system. How much information is encoded in entanglement? Approximations and extrapolations. 2/5 Yonatan Cohen, Quantum Machines. Pulse level control of quantum computers. Better interaction with quantum hardware. Bridge the classical with the quantum world more efficiently. 3/5 Christoph Leuze, Augmented Reality. Assist people in tasks. Could be used to train robots. 4/5 Anqi Zhang, electrode implants for the brain through blood vessels. No surgery needed. 5/5 Jens Kober, human teachers for robot learning. How to teach robots within the context of RL.

Episode November 18, 2023

Events recap Part 2, 11/13-11/17. 1/6 Daniel Worledge, Spin Transfer Torque MRAM (Magnetic Random Access Memory). 2/6 Storage X, Jiyun Kang, cooling system for batteries. 3/6 Eleni Katifori, fluid dynamics in systems with memristor type junctions. Complex evolution of networks. Can be used for data flow models or data flow in soft robots. 4/6 Zerina Kapetanovic, low power communication by using ambient temperature via Johnson noise. 5/6 David Goldhaber-Gordon, create artificial atoms by squeezing electrons into a small 3D space. See how they behave. Create artifacts of materials, see how they behave. 6/6 Xiang Cheng. Modeling bacterial swimming behavior in fluids. Complex, nonlinear fluid systems.

 

Episode November 17, 2023

Events recap Part 1 of week 11/13-11/17. 1/4 Shirin, neural data compression. Diffusion for data compression and ML. Same thing, different angle. 2/4 Agrawal, robotic dexterous hand manipulation, walking on ice, combining proprioceptor data with visual data. 3/4 Song, robotics data collection is robot-complete. Use LLMs for high level robot planning. Diffusion for path planning. 4/4 Raina, flexible hardware design. Configurable logic and memory tiles. Adjust the instruction set architecture to the compiler while flexibly adding functionality to the accelerator chip.

Episode November 13, 2023

Book discussion “One True Loves” by Taylor Jenkins Reid. 1/4 Love is something you can become good at. You need a good partner to train with, like a dance. 2/4 Love is about reciprocity. 3/4 Loyalty is ephemeral. Loyalty and love don’t go hand in hand. 4/4 Identity shifts with time. Love is independent of identity, but life isn’t.

Episode November 12, 2023 II

Weekly seminar recap 11/6-11/10 Part 3. 1/6 Adam Kaufman on using nuclear spin for quantum information and spin squeezing for atomic clock precision. 2/6 Charles Marcus talks about quantum dots bouncing off superconductors, creating Majorana fermions. Superconductivity, quantum entanglement and quantum information. 3/6 Christoph Naegerl on one dimensional bosons, ultra cold as in nano-kelvin. How can you even measure temperature at that level? How do ultra cold bosons behave? Innsbruck is a powerhouse in experimental quantum physics. Naegerl works a lot with theorists. 4/6 Dave Donaldson, Economics. Measuring misallocation in the economy formally through input variations in firms' reaction to demand shocks. The fewer shocks, the less misallocation. 5/6 Fei-Fei Li book party. “The Worlds I See”. The AI community has to focus on measuring the human aspects of models. What is a good model? What is fair? What is true? 6/6 East Asian philosophy. What is the role of literature in forming a moral compass? What is the role of literature in the age of AI? Who is going to teach the machines? Confucius vs. Laozi.

Episode November 12, 2023

Weekly event recap 11/6-11/10 Part 2. Data geometries impact neural net geometries. Use geometric algebra and inherent symmetries for robot learning. Exploit those symmetries for learning and inference, in particular when compute and power budgets are limited. Compressed data analysis with geometric priors. SueYeon Chung, Taco Cohen, He Wang, Ajil Jalal.

Episode November 11, 2023

Weekly event recap 11/6-11/10 Part 1. 1/4 He Wang on sim-to-real robot grasping. Focus on geometry. Robotics must bootstrap itself to create the data set. Tesla is doing that. Low cost, scalable. Focus on industrial applications so robots scale. That way we bootstrap the data set. What can robotics do for sim? 2/4 Ding Zhao on continuous learning and task specific adjustment of models. 3/4 Fei Miao on quantifying uncertainty in perception. Also multi agent behavior modeled through RL. What is the collective policy, intent and reward function? 4/4 Philosophy: Confucius vs Laozi. Define everything or focus on the essence of things.

Episode November 10, 2023

Robotics must bootstrap itself with low cost robots at scale. Learn from them. Build a data set like the internet did for NLP. Tesla is optimizing for low cost of compute per watt and low cost of actuators. Transfer learning from the car for depth estimation through cameras.

Episode November 5, 2023

Discussing “The Wager” by David Grann. 1/4 A natural experiment in anthropology. How do people behave on long confined voyages? The English navy found a way to organize and become a global power. Who will do that for Mars? 2/4 Castaways. People adopt rules from the motherland. Would that also happen with Communists? 3/4 Mutiny. Eventually rebel forces take over. 20-60-20. 4/4 Empires deal with news the way it suits them. "Truth is when you strip events from the ornament of narrative."

Episode November 4, 2023

Event summary, week of 10/30-11/3. 1/8 Active oxides for physics based in-memory compute. Memristors. Philip Wong, carbon nanotubes for energy efficient compute. 2/8 Autonomous flight vehicle collision avoidance through Reinforcement Learning. 3/8 Automated bio labs, robotics for enzyme and protein engineering. How do you solve the exploration vs. exploitation problem with science robots? 4/8 Physics, quantum spin ice. Novel properties in electric and magnetic conductivity. 5/8 Solar astrophysics. Working on prediction of space weather caused by solar flares. 6/8 Operations Management. Design flexible resource networks to adaptively satisfy stochastic demand. Why isn’t every resource allocation problem a network problem? 7/8 Jo Fox on the crisis of the humanities. Solution: teach machines what it means to be human. What is a novel, what is a good summary, what is fairness etc. AI needs some sort of humanities. But what? 8/8 CARS. Sustainable mobility. Papers on RL for ride share allocation. Use LLMs for edge case detection in vision.

 

Episode November 3, 2023

Market rally due to macro relaxation. 1/2 Housing down. Housing is a "Marie Antoinette" economy where growth allegedly comes from conspicuous consumption. No. Growth comes from more-for-less type innovations. 2/2 The Fed is resisting monetizing government deficits. The Dept. of Transport event at CARS (Center for Automotive Research at Stanford) shows the difference between government involved research and productive science. Mission creep. Transport cannot solve the economic divide.

Episode November 1, 2023

Discussing the Bay Area Robotics Symposium 2023 (BARS). 1/6 LLM architecture for sensorimotor data. Tokenize motion along six degrees of freedom and use masked models for training. Predict the next move. 2/6 LLMs for context, for example, vision. Use language to understand WHAT, WHERE and WHY. 3/6 A repository for sensorimotor data. 4/6 Tactile sensors, soft robotics. 5/6 Multi agent robots. Aerospace and terrestrial transport. 6/6 Simulation is key (Malik).
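The motion tokenization in 1/6 can be sketched as simple uniform binning of per-timestep deltas along the six degrees of freedom; the range and bin count below are invented for illustration:

```python
import numpy as np

# Uniform binning of motion deltas: each of the six degrees of freedom
# (x, y, z, roll, pitch, yaw) is quantized into one discrete token per
# timestep, ready for masked or next-token training.
N_BINS = 256
LOW, HIGH = -1.0, 1.0          # assumed per-step delta range per DOF

def tokenize(motion):          # motion: (T, 6) array of deltas
    clipped = np.clip(motion, LOW, HIGH)
    return ((clipped - LOW) / (HIGH - LOW) * (N_BINS - 1)).round().astype(int)

def detokenize(tokens):        # map token ids back to continuous deltas
    return tokens / (N_BINS - 1) * (HIGH - LOW) + LOW

motion = np.array([[0.0, 0.5, -0.5, 0.1, -0.1, 1.0]])
tokens = tokenize(motion)      # six integer tokens in [0, 255]
print(tokens)
```

Once motion is a stream of discrete tokens, the same prediction machinery used for text applies unchanged.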

Episode October 27, 2023

Academic week review. 1/9 Cosmology. Find signal in noise to build a model of how the universe evolves. Find structure. Reverse engineer the evolution of the Universe back to the Big Bang. 2/9 Digital Economy Lab. Research on how generative AI influences the workplace. Leveling of the playing field. 3/9 Chemically engineered soft liquid for touch sensors. 4/9 HAI conference. Learn from machines. Questions about what is creativity, what is fair, what is good AI etc. 5/9 Physics: Vladan on atomic clocks and Rydberg states for quantum information processing. Ten logical qubits already! 6/9 Creative writing reading. Poets must process life to compete with AI. 7/9 Onur Mutlu. The DRAM space is commoditized but constitutes a big risk for the future of compute. Compute must be data centric. 8/9 Scott Aaronson on how to make sure we know when something comes from AI. Watermarking. 9/9 BARS. How to use LLMs for robotics; talked to Malik about whether we should focus on simulation (yes, but resistance in the field). NeRFs + occupancy solve for depth and robotics.

Episode October 23, 2023

Digital Economy Lab seminar with Ethan Mollick (Wharton). Research on the impact of AI at BCG. 1/3 AI levels differences in quality among employees. 2/3 80% of the work can be done by 20% of employees. 3/3 Creativity cannot be achieved with AI, but it can be measured. Impact on Wharton? Use AI as a baseline and teach students everything else, such as leadership, interpersonal skills, risk, creativity. What is creativity? Use AI as a baseline and define creativity as what humans come up with ex AI.

Episode October 20, 2023 II

Baylearn 2023. 1/4 Percy Liang on benchmarking. Deep questions about AI. Example: fairness can be solved for analytically in a Kantian and/or Rawlsian way. 2/4 Christopher Re on aligning models with data. Data is the key and models need to adjust to how data flows through the stack. Adjust the stack. 3/4 Applying the LLM architecture to more use cases and/or combining it with other dedicated models such as vision. 4/4 A Berkeley team presents a paper on using the LLM architecture for video generation.

Episode October 20, 2023

Tesla post earnings discussion. 1/4 Volume growth reset. 2/4 Margin trough? 3/4 Earnings trough. 4/4 The energy business is good and the silver lining of an otherwise somber call. Questions for the company: 1/4 AI computer architecture. Why, and how will it materialize? 2/4 FSD end to end? How will it materialize? 3/4 Will Tesla sell software? 4/4 How to model the energy business?

Episode October 18, 2023

Book discussion “The Spectator Bird” by Wallace Stegner. 1/3 The fine line between love and companionship. 2/3 Living in Silicon Valley by choice. "We like others to envy us". Depth and beauty unmatched, but you’re not in the thick of things. 3/3 Organized breeding and genetics is a scary technology to cope with. Attracts lunatics. Bad. But can be great if used properly.

Episode October 17, 2023

Discussing the Tesla earnings call preview. Margin vs volume: 1/3 amortize fixed cost, 2/3 amortize software, 3/3 FSD. Push volume now vs near term margins for long term cash flow. Items to mention: 1/6 Cybertruck, 2/6 Semi, 3/6 4680 progress, 4/6 vertical integration (mining), 5/6 China, 6/6 power market, energy prices, utility business.

Episode October 11, 2023

Discussing the HAI seminar talk at Stanford on autonomous agents. 1/3 Intrinsic motivation. Is there such a thing as an agent without intrinsic motivation? 2/3 Curiosity. Exploration vs exploitation. How do you encode curiosity with long tail payoffs? 3/3 Autonomous agents as code assistants with LLMs. What are they actually incentivized to do? Is self play an option?
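One common way to encode the curiosity in 2/3 (an assumption here, not something from the talk) is a count-based exploration bonus c/√N(s) added to the extrinsic reward:

```python
import math
from collections import Counter

# Count-based curiosity bonus: reward = extrinsic + c / sqrt(N(s)), so rarely
# visited states look temporarily valuable even when their extrinsic payoff
# is zero. The bonus form and constant are assumptions for illustration.
visits = Counter()

def curious_reward(state, extrinsic, c=1.0):
    visits[state] += 1
    return extrinsic + c / math.sqrt(visits[state])  # decays with familiarity

first = curious_reward("new_room", 0.0)   # novel state: full bonus
for _ in range(98):
    curious_reward("new_room", 0.0)
hundredth = curious_reward("new_room", 0.0)
print(first, round(hundredth, 2))
```

The decaying bonus keeps the agent probing long-tail states early, then hands control back to the extrinsic reward as novelty wears off.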

Episode October 8, 2023

Discussing “The Sportswriter” by Richard Ford. 1/4 Life is a film covering your body. Break out. Feel the cold air on your cheeks. Feel like a child. 2/4 Nietzsche allegories. Life happens to us. "Sportswriters live in their minds and on the edge of others”. “Team talk is wrong. It’s like the dynamo of the 19th century. It leaves out the hero.” 3/4 Don’t let things happen to you. Make them happen. Sports, academics and other hero type professions are declining because there is not enough agency. The US turns into a paper pushing society. 1980s. Today better. 4/4 Death is only a problem if you don’t live life to the fullest. “Death is a problem, it’s too severe, too unequivocal, a mistake in addition."

Episode October 7, 2023

Discussing our most recent essay “The Perils of Monetocracy”. 1/6 The Fed is a culmination of expedient solutions to immediate problems that have taken on a life of their own. 2/6 The US was built on the pillars of liberty, prosperity and fairness. 3/6 The Fed was established in 1913 and is orthogonal to those goals. 4/6 The key problem is the toxic relationship between Congress and the Fed, which is monetizing debt. 5/6 The solution is experimentation through competition among constituencies. 6/6 The Uncertainty Principle of Political Economy: you can never precisely define and achieve multiple policy goals at the same time. That opens the door for experimentation. How to optimize liberty, prosperity and fairness.

Episode October 3, 2023

We discuss a potential solution to the current macro problem. What’s the problem? 1/2 The housing market bid/ask is paralyzing the economy. 2/2 Too much government spending. Solution: 1/3 Slow adjustment of the housing market by adding supply and lowering the price closer to the bid. Sour mortgages can be absorbed gradually by the Fed. 2/3 Government constrained in spending. Must reduce entitlement spending. 3/3 Silicon Valley innovation drives productivity and growth.

Episode September 28, 2023

Some things you can’t measure. For everything else, there’s physics. Quantum physics is the study of the limits of what humans can measure. The same applies to economics. Liberty, prosperity and fairness don’t commute. In the limit they are not achievable concurrently. This is analogous to the Heisenberg Uncertainty Principle. Also Gödel’s Incompleteness Theorem. Continuously finding new, better solutions. Never fully right. Connect the Uncertainty Principle to Popper and Deutsch. But! In order to be wrong you have to be on the right path.
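The non-commutation analogy above can be made concrete with a toy numerical sketch (our illustration, not from the episode): two quantum observables whose operators don’t commute cannot both be measured sharply at once, which is the structure the episode borrows for competing policy goals.

```python
import numpy as np

# Pauli matrices X and Z: the textbook example of non-commuting observables.
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# If X and Z commuted, the commutator XZ - ZX would be the zero matrix.
commutator = X @ Z - Z @ X
print(commutator)
print(np.allclose(commutator, 0))  # False: no state is sharp in both at once
```

The non-zero commutator is exactly what the Uncertainty Principle quantifies; the analogy in the episode treats liberty, prosperity and fairness as similarly non-commuting objectives.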

Episode September 25, 2023

The Fed is taking a stance. Rebelling against being used as an ATM by Congress. No more unlimited monetization of government debt. Short term bad. Long term good. Higher for longer means “Congress, stop spending!” See interview with Thomas Hoenig.


Episode September 21, 2023

Discussing our essay “The World Runs on Compute”. 1/6 The most productive companies dominate the supply chain and eventually the market. 2/6 Productivity is driven by compute. 3/6 Libertarian Paradox. Software Eats the World meets The Innovator’s Dilemma. Disruption is good. But it creates high market concentration and dominant firms. 4/6 We call for a constitution for corporations modeled after the US constitution. 5/6 Implication for investments. Applying compute to solve real business problems creates wealth and outsized returns. 6/6 Society must protect innovation. Disruptors must share wealth creation with society.

Episode September 19, 2023

We live in a Monetocracy. Discussing the repeated interventions by the Fed in bond markets in 2008 and 2020. Both events were nails in the coffin of liberal market democracy. The Fed is killing the price of risk and turning into a central planning agency with a political agenda. Now we have Fed officials talking about influencing climate policy etc. This must stop.

Episode September 13, 2023

Discussing the book “Solito” by Javier Zamora. 1/3 Hopeless journey to the place of hope. 2/3 Poetry. Describing sensual experience and fantasy. 3/3 Glimpse of humanity in the dark.

Episode September 11, 2023

Discussing a physics seminar attended at Berkeley. Cosmology is like economics. Lots of macro data, trying to figure out patterns, build models, calibrate parameters and use them for inference. AI drives physics and physics drives AI. AI drives physics with data analysis. Physics drives AI by developing better mathematical and statistical models for data analysis. In particular, finding ways to reduce the data analyzed while maintaining performance.

Episode September 8, 2023

High-productivity companies will dominate markets and grow to GDP size. A new discipline in economics, political economy and constitutional law is required. How to govern government-size companies? When firms become dominant and large, governing them is more important than breaking them apart. We need more thought about how to deal with massive power. Productivity = fast iteration, low cost of error correction and low cost of error. The ultimate function of the government's Anti-Trust division is to limit the power of firms and preserve the government's monopoly on power.

Episode September 7, 2023

Discussing two papers from SITE 2023. 1/2 Firms organize around productivity within value chains. Flip this around and state that value chains with high dispersion of productivity are not going to last. Either they vertically integrate (Tesla) or they turn into lemons (Uber). Productivity dispersion happens when new software enters the market. We predict more vertical integration driven by software and AI companies. 2/2 Forward pricing of options shows that FOMC and CPI announcements are priced as riskier after 2022.

Episode September 6, 2023 II

SITE (Stanford) seminar on asset pricing. Economists are like algo traders - all beta, no alpha. Interesting comment: “The US would have gone above 150% debt/GDP in the war with Japan if not for dropping the atomic bomb and stopping the war.” You see what you think and think what you see.

Episode September 6, 2023

Is Web3 water in the sand or the build-out of a modern-era Golden Gate Bridge? The problem with Web3 is not the tech, it’s the investors. Venture capital is too much about “capital” and not enough about “venture”. Currently Web3 is more water in the sand than Golden Gate. Long-term businesses need short-term bridges, like SpaceX with its launch service or Starlink.

Episode August 31, 2023

The AI opportunity = emergence of a new compute architecture. Every technology revolution is driven by new compute. Accelerated compute = parallel + interconnect + memory. Low cost per AI training workload, scalable and low power. Ultimately it’s the emergence of entrepreneurs with guts that drives progress, neither science nor capital. Long term it’s gutsy decisions such as Dojo, CUDA, end-to-end FSD etc. that drive revolutions.

Episode August 30, 2023

End to End is the robot’s best friend. Tesla shows off FSD version 12, which is “fully end to end”. It learns to drive from data, with no explicit programming. Big step towards real-world AI. Tesla is at the forefront because they are pushing for it and because they can. This is as important as the launch of Model S, 3 and Y. Remove explicit programming from AI training and add it to data curation.

Episode August 26, 2023

It’s size by scaling, not scaling by size. Wealth creation = scaling technologies. Exploration in academia, exploitation by entrepreneurs. Scaling is the knowledge that creates wealth. Quantify. “What gets measured gets done.” How to deal with ever-larger companies and increased market power?

Episode August 25, 2023

Discussing Ritchie Robertson’s “The Enlightenment”. Explanations. People are the entity that generates explanations. Explanations are substrate independent. People don’t have to be human, necessarily. Democracy of explanations (not just ideas). Risk of Enlightenment = Faust. Risk to Enlightenment = central banking. We are still living in the Enlightenment. Rationality - Newton - differential equations. Today a new compute paradigm with neural net architectures. Away from pure rationality. Risk of relativism.

Episode August 23, 2023

Discussing Nvidia Q2. Platform shift from general-purpose compute towards accelerated compute and generative AI. Nvidia has scale, reach (data center, robotics etc.) and depth (GPU, networking, CUDA). Key advantage is architecture. GPU, networking and software (CUDA) are the combo necessary to deliver solutions in the modern compute environment. Scaling is key. Nvidia is good at scaling, like Tesla.


Episode August 21, 2023

❤️ It matters what cats can do, not logic. Hardware first, then intelligence. Jitendra Malik on Robot Brains. Small Science - Big Science. My opinion - exploitation is for entrepreneurs, not science. Exploitation = scaling. Science leaps when the time is ripe. ImageNet needed GPUs. Self-driving cars needed vision. Robots require soft polymers to absorb falls so they can keep falling and learning.


Episode August 17, 2023

Tesla shares underperform. 1/4 Disappointing earnings call. 2/4 Price declines in China. 3/4 Departure of CFO. 4/4 Higher US rates - unwind of the yen carry trade.

Episode August 12, 2023

❤️ Discussing David Deutsch’s conversation with Naval and Brett. 1/4 Knowledge is explanations tested against nature. Don’t get drawn into definitions. 2/4 Tautology. Isn’t nature just a theory humans formulate to understand the world? Where is the demarcation line between theory and nature? Isn’t nature just another word for an abstraction of reality, i.e. theory? 3/4 If there is no absolute knowledge, no king of knowledge, then what is? How do you control for nihilism? 4/4 Good explanations proliferate throughout the multiverse. They solve problems created by previous explanations. Kepler - Newton - Einstein.

Episode August 5, 2023

John Schulman on Robot Brains. AI can find better ways to read nature and solve problems. Self-attention in biology could lead to new insights. Leapfrog human constraints such as thinking in linear terms. LLMs could capture more complex relationships. For example, tokenize the nuclear spin of atoms in the brain and find quantum operations in the brain. Matthew Fisher at UCSB looks at the nuclear spin of the Li-6 vs Li-7 isotope and finds effects on cognitive functions. The conclusion is that nuclear spin might interact with the brain through quantum operations. Spintronics vs. electronics.


See Podcast Site continued
