"Not Even Wrong" Podcast
Investing in fundamentally new concepts and engineering practices with large impact.

Episode, May 1 2024

Discussing the book “Girl in a Band” by Kim Gordon of Sonic Youth. 1/3 Life accomplished. Self-respect driven by zero-to-one type energy. Good story because it’s authentic. 2/3 The New York art scene, 1970s-90s. A confluence of forces: rock, pop, classical, visual art, fashion. What makes a place vibrate with potential? 3/3 Stylistically impressive biography. Generous with her thoughts and life story without provoking voyeurism. Kim is a poet with noise.

Episode, April 30 2024 II

Sputnik FSD. Tesla China FSD engagement. 1/3 A Sputnik moment for autonomous driving in the US. Remove regulator resistance by creating competition. 2/3 An inverse Manhattan Project for FSD in China: large-scale deployment of FSD and experimenting with business models. 3/3 Monetize FSD. A/B testing.

Episode, April 30 2024

Robert Dyro, Stanford. Modeling autonomous driving. Moravec’s paradox: the hard things are easy and the easy things are hard. 1/2 Plan agent actions under an assumption of rationality. Like the efficient market hypothesis: a good baseline but not realistic. Add uncertainty through Monte Carlo simulation and optimization across trajectories. 2/2 Counterfactual sampling from real-world examples. Generate edge-case samples to train agents in simulation. Model predictive control vs. Q-learning.
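
To make the idea concrete, here is a minimal sketch, not from Dyro's work, of planning under uncertainty by Monte Carlo sampling across candidate trajectories; `step` (dynamics) and `cost` are placeholder functions the caller would supply.

```python
import numpy as np

def rollout(x0, actions, step, noise_std=0.1, rng=None):
    """Simulate one noisy trajectory from state x0 under a fixed action sequence."""
    rng = rng if rng is not None else np.random.default_rng()
    x, traj = np.asarray(x0, dtype=float), []
    for a in actions:
        # nominal dynamics plus process noise to model other agents' non-rational behavior
        x = step(x, a) + rng.normal(0.0, noise_std, size=x.shape)
        traj.append(x)
    return np.array(traj)

def monte_carlo_plan(x0, candidate_plans, step, cost, n_samples=100):
    """Pick the action sequence with the lowest expected cost over noisy rollouts."""
    expected_costs = [
        np.mean([cost(rollout(x0, plan, step)) for _ in range(n_samples)])
        for plan in candidate_plans
    ]
    return candidate_plans[int(np.argmin(expected_costs))]
```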

Episode, April 28 2024

Vertical Integration - three case studies

Integrate if value chain is not productive enough.

  1. Recover rare earth metals from coal ash. Tesla should be investing in such upstream activities. When a car drives 5-10x more miles, you want lower energy cost and a powertrain/battery that lasts longer.

  2. AI training. Two major problems: parallelism and scarcity. Nvidia solved these for FSD training, but multimodal training for robots could be an opportunity for vertical integration.

  3. Semantic maps. Mapping has been solved for location and distance but not for semantic understanding of environment. 

Episode, April 26 2024 II

Sara Aronowitz, Uni Toronto

In formal theories of decision-making, preferences are given. Learning and preference formation are memory-dependent, and memory is path-dependent. That’s why preference can appear inconstant; it actually isn’t.

“I argue that the core function of any memory system is to support accurate and relevant retrieval."

Peter Caradonna, Caltech

Measure preference intensity consistently across models and use cases. Is there a canonical way to measure how much more you like A versus B? Introduce a numeraire: money. Use arbitrage to separate logic from a-logic. Using humans to teach AI with RLHF: make sure humans have diverse experiences, which leads to more diverse preference pairs and thus more information.

Episode, April 26 2024

Mixture of Experts (MoE) paradigm and the Switch Transformer

Scaling matters. Power law. Scaling compute and parameters both work. How can models be scaled and kept efficient? MoE models are more sample-efficient because experts divide the work and take more advantage of the data. Switch Transformer: turn off the weights you don’t need. MoE: activate only the experts you need. One big model with trillions of parameters, but don’t use all of them at once at inference time. When you surf, you don’t use your programming skills. Similar to neuromorphic compute.
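
A toy sketch of Switch-style top-1 routing: a router picks one expert per token, so only a fraction of the parameters is active at inference. This is an illustration, not the actual Switch Transformer code, and it omits the load-balancing loss and capacity factors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchMoELayer(nn.Module):
    """Toy mixture-of-experts layer: each token is routed to a single expert."""
    def __init__(self, d_model: int, n_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # routing logits per token
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                            # x: (n_tokens, d_model)
        gates = F.softmax(self.router(x), dim=-1)
        top1 = gates.argmax(dim=-1)                  # one expert index per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            sel = top1 == i
            if sel.any():
                # only the selected expert runs; scale output by its gate probability
                out[sel] = gates[sel, i].unsqueeze(-1) * expert(x[sel])
        return out
```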

Episode, April 25 2024

Douglas Hofstadter, “I Am a Strange Loop”.

DNA is self-referential. The I is self-referential. “I need my eyes to see and I need my I to be.” How can DNA create itself? How can the I define itself? Gödel. The I-loop is created by self-referential systems that turn an agent into an I. Collect information and act upon it, then add a self-referential loop. Agents interact with the environment through preferences. Some of those are deterministic; some fall into a self-referential loop (cyclicality) and can lead to inconsistent behavior. Is this the source of human creativity? What is the purpose of the I? According to Hofstadter, the "I" is not just a static concept but rather a dynamic process that emerges from complex interactions within the brain. A meta reward function that optimizes for energy. This could be the path towards self-referential robots with Robinson Crusoe abilities.

 

Episode, April 24 2024

Tesla Q1 2024

Definition of a good earnings call: it makes you want to work. Shift in direction from EV manufacturing to Autonomous EV (AEV) manufacturing. Companies change direction when technology changes. Innovator’s Dilemma. EV manufacturing means battery, powertrain, scaling rapidly. AEV manufacturing means building a car that can drive without a human and lasts 5x longer. “If you want to invest in Tesla, test drive FSD”.

Arc of technology: vision in 2012, LLMs in 2023 enable semantic understanding of the environment. Huge for robotics. Savings per mile, per minute. Do more with less. Value creation. Next step: native AEV products. Netflix keeper test for employees and investors: is Tesla still the right investment for the job? Hire people who are willing to switch and subordinate themselves to the technology. New tech requires new workflows.

 

Episode, April 23 2024 II

ML data attribution. An appeal to the AI community: stop tinkering with the truth. Safety does not mean misrepresenting the truth; tinkering with the truth is the path to an Orwellian nightmare. LLMs are a compact representation of the knowledge on the internet. Tinkering with that representation means changing it to the liking of a few privileged censors. This is the antithesis of safety.

Episode, April 23 2024

Ferenc Krausz, Max Planck. Attosecond physics. Design laser flashes pulsed at attosecond time intervals to shed light on electrons. The finer we see, the more we learn.

Kathy Galloway, MIT. Micro/nanoscale reactive transport toward decarbonization

Integrating synthetic circuitry into larger transcriptional networks to mediate predictable cellular behaviors. The stochastic nature of transcription is a challenge.

Dr. Wen Song (UT Austin). She studies the microstructure of coal ash and then designs methods to extract rare earth elements (REEs) from it. Fluid-fluid and fluid-solid mechanics are discussed. The key to this work is studying the microstructure of coal ash with physical models and then deriving the fluid mechanics from those models to design extraction techniques.

Episode, April 22 2024

Sergey Levine, Data-Driven RL in Robotics,

Represent the real world. Unsupervised pre-training. LLMs. VLMs. Do these methods encode knowledge? Where does the knowledge come from? It represents the people who put it on the internet. Find optimal trajectories with offline RL.

Alex Havrilla on TWIML podcast

Fine-tuning LLMs with RL. Paper on how to teach LLMs with RL. Surprisingly, most methods (PPO, DPO, RLHF) deliver similar performance. Why? They are mostly deterministic, not taking advantage of exploration. How can exploration be improved? Is there an AlphaGo Zero approach to LLM reasoning? Self-play. RL.
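
For reference, a minimal sketch of the DPO objective that the episode contrasts with PPO-based RLHF; tensor names are mine and the batching/plumbing is omitted.

```python
import torch.nn.functional as F

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Direct Preference Optimization on summed sequence log-probabilities.

    Each argument is a 1-D tensor: log-probs of the preferred (chosen) and
    dispreferred (rejected) responses under the trained policy and under a
    frozen reference model."""
    chosen_reward = beta * (policy_logp_chosen - ref_logp_chosen)
    rejected_reward = beta * (policy_logp_rejected - ref_logp_rejected)
    # maximize the margin between preferred and dispreferred responses
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()
```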

Episode, April 21 2024

Foundation models for embodied decision making agents part 16. 

Ken Goldberg talk at Stanford. Discussing end-to-end learning versus engineering in AI.

Discussing the MOKA paper, which leverages VLMs and LLMs. How many priors do we need?

MOKA paper references Hu paper, where they use VLMs to execute robot actions.

Does knowledge require priors? Reduce search space. Exploration? RL. 

Does an FSD car know physics even though it’s not trained on physics priors? 

Christopher Peacocke, Columbia University

A priori truth, which is independent of experience and axioms, is possible. I want to invoke Gödel: a priori truths within a given system exist, but some of them cannot be proven (incompleteness). New truths can be discovered. How can new knowledge emerge from existing axioms? How is Move 37 possible? Mathematics is not a priori; it’s a formal system of symbols and rules.

Robots are proof that there is an a priori. We just don’t know it.

 

Episode, April 18 2024 II

Reaction to Dave Lee Podcast: FSD v12: Tesla's Autonomous Driving Game-Changer w/ James Douma

 

  1. Tesla has advantage in FSD training because they built a database of human driving data. 

  2. Implicit versus explicit heuristics. 

  3. FSD-native products and services will propel Tesla to being valued as a real-world AI company on Wall Street.

  4. Optimus. Advantage for Tesla: transfer learning from FSD for navigation, speed, and iteration on low-cost hardware. No data advantage. Iterate on engineering so that when the software is ready, the hardware is ready. ALOHA.

  5. Pipeline for real world robotics.

Episode, April 18 2024

Foundation models for embodied decision making agents part 14. 

Nikolay Atanasov, Elements of Generalizable Mobile Robot Autonomy

UCSD

  1. Robot model. Using Hamiltonian dynamics to model speed, torque, force, etc. reduces the search space (see the sketch after this list).

  2. Environment model. Represent the environment. Semantic landmarks.

  3. Task model. An LLM takes the language description of the task, translates it into an automaton, and executes the task. In this paper the authors suggest an LLM-based approach to scene graphs.
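
A minimal sketch of what "Hamiltonian dynamics reduce the search space" can look like in a learned model. This is the generic Hamiltonian-network pattern, not Atanasov's code, and it leaves out control inputs and dissipation.

```python
import torch
import torch.nn as nn

class LearnedHamiltonian(nn.Module):
    """Learn a scalar Hamiltonian H(q, p); dynamics are then forced to follow
    Hamilton's equations, restricting the search space to physically
    plausible (energy-conserving) motions."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.H = nn.Sequential(nn.Linear(2 * dim, hidden), nn.Tanh(),
                               nn.Linear(hidden, 1))

    def forward(self, q, p):
        # make q, p leaf tensors so H can be differentiated with respect to them
        q = q.detach().requires_grad_(True)
        p = p.detach().requires_grad_(True)
        H = self.H(torch.cat([q, p], dim=-1)).sum()
        dHdq, dHdp = torch.autograd.grad(H, (q, p), create_graph=True)
        return dHdp, -dHdq   # dq/dt = dH/dp,  dp/dt = -dH/dq
```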

Episode, April 17 2024

Discussing the Stanford Robotics workshop. 1/2 Prof. Wu talks about representing physics in virtual space. Transformer with RL and zero-shot learning. 2/2 Grace Gao on neural HD maps. Maps 1.0 is location; Maps 2.0 is semantics. Multimodal transformer with LLM and visual representation.

Episode, April 16 2024

Tesla transitioning from mass producer of electric cars to transport on demand. AWS business model. Robotaxi. Large distributed infrastructure based on real-world AI, compute and connectivity. What is transport going to look like? Tesla is in the process of defining it. Turmoil in the C-suite because the problem set the company faces is changing.

Episode, April 15 2024

  1. Drew Baglino leaving Tesla. Keeper test (Netflix). Tesla is morphing into a software-defined hardware company with a focus on AI and distributed computing. Battery and powertrain are still important but not key to the future of Tesla. A more software- and AI-driven executive team because the key problems are in this area.

  2. Interest rates up. Bad for car loans. Pressure on car market. 

  3. Skepticism amongst scientists towards Elon Musk. It must be grounded in science, not in status preservation.

Episode, April 14 2024

Events 4/8-4/12 Part 4

1. Multi-Sensory Neural Objects: Modeling, Inference, and Applications in Robotics.   ❤️ Jiajun Wu of Stanford University

What makes an object an object? Unsupervised segmentation and 3D representation of objects for simulated robot learning.

This paper summarizes Wu’s research goal:

“My research goal is to build machines that see, interact with, and reason about the physical world just like humans.”

Inverse real-to-sim problem. The Galileo model, proposed by Jiajun Wu and colleagues, is a generative model for solving problems of physical scene understanding from real-world videos and images. Can foundation models solve some of the physics problems in robotics? Wu writes about challenges: 1/5 data scarcity, 2/5 high variability, 3/5 uncertainty quantification, 4/5 safety evaluation, 5/5 real-time performance.

2. Spatially-Selective Lensing for VR Displays. Summary, Aswin Sankaranarayanan (CMU). 

Aswin developed a lens that can display multifocal images: you take a 2D picture and get 3D-like vision because you can focus on different objects simultaneously. They achieve this by building a software- and/or electronically defined Lohmann lens, a system that creates a focus-tunable lens by translating two cubic phase plates relative to each other.
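
A back-of-the-envelope version of the standard Alvarez-Lohmann argument for why translating two cubic phase plates acts as a focus-tunable lens; A is the cubic coefficient, δ the lateral shift, k the wavenumber (my notation, not from the talk):

```latex
% Two complementary cubic phase plates, shifted by +/- delta along x
\varphi_{1}(x,y) = A\left(\tfrac{x^{3}}{3} + x y^{2}\right), \qquad
\varphi_{2}(x,y) = -A\left(\tfrac{x^{3}}{3} + x y^{2}\right)

% Combined phase of the shifted pair
\varphi_{1}(x-\delta,y) + \varphi_{2}(x+\delta,y)
   = -2A\delta\,(x^{2}+y^{2}) - \tfrac{2}{3}A\delta^{3}

% Up to a constant this is a thin-lens (quadratic) phase, so the focal
% power scales linearly with the mechanical shift delta
-2A\delta\,(x^{2}+y^{2}) \equiv -\frac{k}{2f}\,(x^{2}+y^{2})
   \quad\Longrightarrow\quad \frac{1}{f} = \frac{4A\delta}{k}
```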

Episode, April 13 2024 III

Events 4/8-4/12 Part 3 

1. OpenAI presentation, Jason Wei

Multitask: LLMs learn several tasks one by one. Scaling. Why does scaling work? More compute generates more learning.

Hyung Won Chung. Recipe for AI. Develop progressively more general methods with weaker model assumptions. Decoder only.

2. Aaron Dollar - “Mechanical Intelligence” in Robotic Manipulation

Mechanical solutions to robotics problem. Example grasping. Use mechanical feedback.

3. Kyujin Cho Seoul National University. Title: Nature-inspired designs for innovating robots: grippers, wearable robots, and mobile robots

Episode, April 13 2024 II

Events 4/8-4/12 Part 2

Sasha Newton 

UC Riverside

 

Kant on truth. There is higher, transcendental truth and there is empirical truth. Empirical truth is when our cognition matches nature. Knowledge is when lots of people agree with that truth. Higher truth is not directly accessible.

Observer duality. We can only see what we observe. There is more truth behind what we can see. We might be able to conjecture without observation (Einstein, Deutsch). Observations are theory laden. What we see is what we think. 

Popper on Kant. There is no a priori truth. But he agrees that we must conjecture. 

AI - hyper conjecture. Machines conjecture faster. Are these AI systems uncovering new truth or just more truth? Kant would say more truth; Popper would say new truths.

Heisenberg. Uncertainty. If there is a minimal amount of certainty about a relative observation, then there must be a minimal amount of truth. But what happens beyond that? Is that still truth?

Episode, April 13 2024

Events 4/8-4/12 Part 1

Karen Leung

University of Washington 

 

Moravec’s paradox in self-driving: hard things are easy and easy things are hard. Why? Traffic rules. More agents constrain themselves. Reduce degrees of freedom.

How to develop a data driven, flexible and robust analytical framework for safety in robot interaction. 

 

  1. Quantify safety with Hamilton-Jacobi reachability. Augment translation matrix with HJ variables. 

  2. Parametrize HJ so that it becomes learnable (see the sketch below).

  3. Train. What is good data?

 

The key to Karen’s work is how to develop a priori techniques to gauge the safety of a robot.
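
As a concrete illustration of "parametrize HJ so that it becomes learnable", here is a minimal sketch: a neural value function plus a discounted safety-Bellman backup of the kind used in the HJ-reachability + RL literature. The architecture, shapes and `gamma` are placeholder assumptions, not Karen's implementation.

```python
import torch
import torch.nn as nn

class SafetyValue(nn.Module):
    """Neural approximation of an HJ-reachability value function V(x);
    V(x) > 0 roughly means the state can be kept away from the failure set."""
    def __init__(self, state_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

def safety_bellman_target(V, l_x, next_states_per_action, gamma=0.99):
    """Discounted safety-Bellman backup:
    target = (1 - gamma) * l(x) + gamma * min(l(x), max_a V(x'_a)),
    where l(x) is a signed distance to the failure set (positive = safe)."""
    with torch.no_grad():
        best_next = torch.stack([V(xn) for xn in next_states_per_action]).max(dim=0).values
        return (1 - gamma) * l_x + gamma * torch.minimum(l_x, best_next)
```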

Episode, April 11 2024 II

Compound effect of technology. Innovation stack: when innovations build on each other. Scaling is the main driver of wealth creation.

  1. Robot technology will get a boost when 3D-fication of 2D images gets solved. Video and images can then be used to train robots in simulation. A self-driving car doesn’t touch things, so it doesn’t need 3D-fication of images. Robots do.

  2. Self-driving got a boost and became solvable when vision was solved in the early 2010s.

  3. Compound value of technological innovation is like biological evolution. How do big changes in evolution happen? Species conquer new territory with techniques that have been dormant or not so relevant in previous biotope. See Neil Shubin and the transition from water to land. Same with technology. Some groundwork must be done before big leaps can happen, like solving 2D-3D in image to enable better simulation for robotics. 

Episode, April 11 2024

Foundation models for embodied decision making agents Part 13. Jitendra Malik, “When will we have intelligent robots?” 1/4 The bang for the buck in RL is adaptation, not just searching the action space. 2/4 The key concept for robotics is 3D-fying 2D images so robots can learn from video data. Compound value of technology: solve 3D-fying 2D, train robots on video data, then train robots on robot data. 3/4 The transformer architecture is key to robotics: learn from action-state tuples and model the robotics problem as next-token prediction. Simulation: once 3D-fying is solved, simulation for robots will get a boost, similar to when convolutional neural nets enabled vision and self-driving cars. 4/4 Compound value of technology: sometimes you have to wait for other components to kick in. Reminds me of the water-to-land argument by Neil Shubin. He argues that some ingredients were in place and then used later when they became useful. Same here: 3D-fying must first be solved to deliver robots at scale.
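
A toy sketch of "model the robotics problem as next-token prediction": interleaved, discretized state/action tokens are fed to a causal transformer and the policy is just the next predicted token. Tokenization, model size and the training loop are placeholder assumptions, not Malik's setup.

```python
import torch.nn as nn

class ActionTokenPredictor(nn.Module):
    """Toy decoder-style model: a trajectory of discretized state/action tokens
    is one sequence, and the robot 'policy' is next-token prediction."""
    def __init__(self, vocab_size: int, d_model: int = 256, n_layers: int = 4, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):                       # tokens: (batch, seq)
        seq_len = tokens.shape[1]
        mask = nn.Transformer.generate_square_subsequent_mask(seq_len)  # causal mask
        h = self.blocks(self.embed(tokens), mask=mask)
        return self.head(h)                          # logits over the next token

# training objective: cross-entropy of logits[:, :-1] against tokens[:, 1:]
```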

Episode, April 10 2024

State of play. 1/2 Tesla FSD 12.3 is launching a new vector of compound growth: FSD-native products and services. A foundry for the multimodal transformer stack. Achieve the NotNot - can't afford not to have it. 2/2 US inflation elevated due to high deficits and debt. A new government and fiscal discipline could reverse this.

Episode, April 6 2024

Events 4/1 - 4/5 

  1. ❤️Kaiming He. Deep learning is about data representation: compression, abstraction, conceptualization. The loop of forward and backward propagation. LeNet: convolution, pooling, fully connected. AlexNet: GPUs, data and model parallelism. ResNet: much deeper neural nets are good. Control for overfitting and gradient collapse with normalization and regularization.

  2. Robot Learning in the Era of Large Pre-trained Models. Dorsa Sadigh, foundation models. Pre-train a representation from that data and then adapt it to different tasks. Meta-learning at scale. What is the pre-training objective? Vision: masked auto-encoding, building on Kaiming He’s paper on masked auto-encoders (see the sketch after this list). What is good data? Novel and high success. Kick out bad data. What does Tesla do with bad drivers? Reward design. Use LLMs to enhance RL.

  3. Learning to See the World in 3D. Ayush Tewari (MIT). Looking at a 2D image and reasoning about 3D. The inverse graphics problem deals with the task of inferring the 3D structure of a scene or object from its 2D image.
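
The sketch referenced in point 2: a minimal version of MAE-style random masking, where only a small subset of image patches is kept visible and the rest must be reconstructed. Masking ratio and shapes are illustrative, not the paper's exact code.

```python
import torch

def random_masking(patches, mask_ratio=0.75):
    """MAE-style random masking. patches: (batch, n_patches, dim).
    Returns the visible patches, a 0/1 mask (1 = masked), and kept indices."""
    b, n, d = patches.shape
    n_keep = int(n * (1 - mask_ratio))
    noise = torch.rand(b, n)                          # random score per patch
    ids_shuffle = noise.argsort(dim=1)                # random permutation per sample
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))
    mask = torch.ones(b, n)
    mask.scatter_(1, ids_keep, 0.0)                   # mark kept patches as visible
    return visible, mask, ids_keep
```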

Episode, April 5 2024

Tesla news on M2 and Robotaxi, Kaiming He, Representation and Douglas Hofstadter.

 

  1. Tesla is apparently scaling back the lower-cost Model 2 to focus on the robotaxi. Dynamic companies often change course because markets change or because their technology enables new avenues. Here it’s the latter: FSD v12 has reached level 4 autonomy. Positive surprise. Now the risk shifts from technology to monetization and regulation. A low-cost M2 is still important for Wright’s Law.

  2. Kaiming He talk on representation as the key to deep learning. Take sensory input like pixels and find an optimal representation. Representation depends on the task; the goal is what a system eventually ends up doing. Contrastive learning requires clear features; for self-driving, occupancy is enough. The fundamental problem of deep learning from data is to manage the looping (forward and backward prop): one input can cause havoc in the network. That’s why AI researchers developed ResNets, regularization, normalization and methods for efficient initialization (a minimal residual block sketch follows this list). It’s about patterns and relationships, not absolute values.

  3. Douglas Hofstadter, in his book “I Am a Strange Loop”, formulates a theory of intelligence based on epiphenomena that are extracted from raw data. Intelligence is efficient representation.
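
For point 2 above, a minimal residual block sketch showing the skip connection plus normalization that make very deep nets trainable; this is the generic ResNet pattern, not code from the talk.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic ResNet-style block: the layers learn a residual F(x) and the input
    is added back, so very deep stacks still propagate gradients cleanly."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),                 # normalization stabilizes the loop
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))             # skip connection: output = x + F(x)
```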

Episode, April 3 2024

Discussing “Picasso and the Painting That Shocked the World” by Miles Unger. Revolution in consciousness around the turn of the 20th century: Picasso, Einstein and Heisenberg. Seeing art through the brain. Reality is in the eye of the beholder, like in quantum physics. The painting "Les Demoiselles d’Avignon" defines a new path in consciousness. Process of creation. Today we are facing another paradigm shift in consciousness driven by AI. Hyper conjecture. Truth is probabilistic discovery. AI expands the search space. The closest we have to the multiverse.

Episode, April 2 2024

Tesla Q1 2024 Production and Delivery Numbers

1/6 Below capacity and expectations; build-up in inventory. 2/6 Model 3 Osborne effect in the US. 3/6 Production and shipping disruptions in Europe and the Middle East. 4/6 Competition: the most important factors are cost and value, and Tesla is the world leader in those categories and will withstand the competitive onslaught. 5/6 FSD adoption expected to increase because it’s very good. Value creation through AI is shifting from a technological problem to a regulatory and political one. FSD is a nail in the coffin for legacy auto, and they will fight it. 6/6 Musk’s political polarization is not a factor because polarization goes both ways.

Episode, April 1 2024

Structure and AI

Animesh Garg of Georgia Tech/NVIDIA. How much structure is there? How much do we need? Explicit vs. implicit structure. This talk argues that there is lots of structure, or low hanging fruit in model design. How do machines see the world? Break things up and let machines figure out how to assemble them.

UniSim

The key in this work is to take real images and model them with implicit geometries (i.e. learned end to end). Use those models to simulate different scenarios in a digital-twin fashion and thus enable a self-driving car or robot to learn end to end in sim.

Episode, March 31 2024

The truck that shocked the world

 

Art is not an aesthetic endeavor; it’s channeling the underlying forces of nature. From “The Painting that Shocked the World” by Miles Unger. The Cybertruck is radical design. A revolution in consciousness, like “Les Demoiselles d’Avignon” by Picasso. Radical means it’s embedded in breakthroughs in science and technology. Demoiselles is rooted in Einstein and Heisenberg. The Cybertruck reveals a new scientific paradigm driven by AI. The line between human and machine is blurred; human intelligence and AI are merging. The Cybertruck is, like “Les Demoiselles”, a new signpost for humanity. Galileo (observe and speak math). Newton (make math useful for physics). Einstein (spacetime). Heisenberg (quantum uncertainty). AI: machine is human and human is machine. The lines are blurred.

Episode, March 29 2024 II

Jimmy Ba, xAI

The age of ImageNet vs. the age of LLMs after ChatGPT. What is different this time?

Models with more parameters don’t necessarily perform better; models with more compute do perform better. Related to Richard Sutton’s essay, “The Bitter Lesson”: compute and search as the pathway to intelligence.

APE: Automatic Prompt Engineer. Define a task and have the LLM improve itself through self-prompting. Why think step by step? Why is chain of thought a useful thing?

Explainability through chain of thought.
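
A toy sketch of the APE loop described above: propose candidate instructions from a few demonstrations, then keep the one that scores best on a small evaluation set. `llm(prompt) -> str` is a hypothetical text-completion callable and the prompt wording is illustrative.

```python
def ape_search(llm, demos, eval_set, n_candidates=8):
    """Sketch of an Automatic-Prompt-Engineer style loop.

    llm:      placeholder callable, prompt string in, completion string out
    demos:    list of (input, output) examples used to propose instructions
    eval_set: list of (input, output) examples used to score each candidate
    """
    demo_text = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in demos)
    candidates = [
        llm(f"I gave a friend an instruction. Based on these input/output pairs,\n"
            f"{demo_text}\nthe instruction was:").strip()
        for _ in range(n_candidates)
    ]

    def score(instruction):
        # fraction of evaluation examples the instruction gets exactly right
        hits = sum(llm(f"{instruction}\nInput: {x}\nOutput:").strip() == str(y)
                   for x, y in eval_set)
        return hits / len(eval_set)

    return max(candidates, key=score)   # best-scoring instruction becomes the prompt
```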

❤️ Episode, March 29 2024

Foundation Models for embodied decision making agents. Episode 11

 

Sergey Levine talk on Reinforcement Learning with Large Datasets

Why does RL help improve outcomes from data that is generated by humans? Because machines have their own psychology and come up with Move 37-type solutions if trained in a hermetic, self-referential learning environment like RL. Use offline RL to scale.

 

  1. Offline RL fundamentals (see the sketch after this list).

  2. Applications to robotic foundation models: large models that train robots for a variety of tasks. Examples: ViNT, Q-Transformer.

  3. RL with generative models. How can RL be used to improve diffusion models? Dolphin riding a bike.

  4. Offline RL with LLMs. Better than RLHF? Is it? Can we rely on machine psychology? 

  5. Richard Sutton essay: “The Bitter Lesson”. Scale and search beat explicit human knowledge. But where is design still relevant? Bootstrap a machine that can self-learn hermetically in a self-referential and scalable way. What about alignment?
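
A minimal sketch of one common offline-RL recipe (advantage-weighted regression style): imitate the logged actions but up-weight those with higher estimated advantage, so the policy stays close to the dataset while preferring better actions. This is a generic pattern, not Levine's specific method; names, `beta` and the clipping constant are my assumptions.

```python
import torch

def awr_policy_loss(log_probs, q_values, values, beta=1.0, max_weight=20.0):
    """Advantage-weighted regression style policy update from an offline batch.

    log_probs: log pi(a|s) for the actions actually taken in the dataset
    q_values:  Q(s, a) estimates for those state-action pairs
    values:    V(s) baseline estimates for the same states
    """
    advantage = (q_values - values).detach()
    weights = torch.clamp(torch.exp(advantage / beta), max=max_weight)  # avoid exploding weights
    return -(weights * log_probs).mean()
```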

See Podcast Site continued
