
About
Focusing on AI Investment

Positions: Vice President at 5Y Capital (2022 - Present), Tech Investor at Bertelsmann Asia Investments (2019 - 2022), Investment Professional at ByteDance (2018 - 2019)

Skills: Corporate Finance, Teamwork, Valuation, Investment Banking, Private Equity, Investing, Due Diligence, Finance

Recent Posts:

Firmware folks! $10K for porting our tiny Zephyr firmware https://lnkd.in/gq4YtpjD to the consumer device.
> port the current firmware (which works well for the omi devkit2 - nrf52840, locked to version 2.0.8) to the omi consumer version (nrf5340, nrf7002)
> add Wi-Fi support for SD-card syncs
Feel free to DM me. https://lnkd.in/g4k7mDGK #firmware #zephyr #omi

Probably the milestone I'm most proud of: $100,000,000 in ARR 💯 True customer impact, value, and love aren't expressed through lofty funding rounds... customer preference isn't measured in likes or awards... it's in revenue, because customers vote with their wallets, every single day. To all our customers who believed in us: thank you for the vote. We're just getting started.

Dear SF friends! 👇 Check out this fun episode featuring Gabriele and me, diving into 3D GenAI! 🍎

🚀 Join Meshy - The Best 3D GenAI platform! We're scaling our dream team:
• Generative AI Researcher
• Machine Learning Engineer
• Graphics Engineer
• Sr Performance Marketing Specialist
…
💰 Earn $5K−10K for successful referrals!
📩 Apply today → careers@meshy.ai #AIJobs #3DModeling #TechCareers #MachineLearning

4 desks available in SF for 30 days for free. Show me why you are cool and what you are working on, and get a dedicated table near me. Most interesting:
- smartglasses, BCI
- software with many users

Happy to share that I'm starting as Director of Creative Solutions at MeshyAI! Taking the jump… Thanks to everyone who cheered for me along the way. There is too much happening to be on the sidelines! I am creating more, faster, better than ever. If you missed the GDC talk and want to start integrating 3D GenAI into your 3D concept art workflow, please reach out! :-)

I built a wearable ChatGPT that actually knows you. Meet OmiGPT →

Thrilled to announce Isomorphic Labs has raised $600M to turbocharge our mission to one day solve all disease with the help of AI. I've long felt that improving human health is the most important thing we can do with AI, and today marks a big step towards a new era of drug discovery. The round is led by Thrive Capital with participation from GV (Google Ventures) and our existing investor Alphabet, and we could not be more excited to be partnering with these top-tier AI and life science investors to help realise our ambitious mission. This is what science at digital speed looks like! Read more: bit.ly/42cnRlg

The advancements in 3D GenAI are pretty crazy... and they go beyond just generating meshes. MeshyAI just released a new update, and a few things stood out for me.
- The textures are now much better aligned to the geometry of the model
- Hard edges are much more defined, but in all honesty, not as sharp as I wish them to be 😅
- Textures got sooooo much better, and the fact that you can "de-light" them makes the AI-generated model much more usable out of the box.
Yet, the craziest feature for me remains the texture inpainting. Just brushing away hallucinations is a dream come true. Try it out, and if you are among the first 300 users to use the code Meshy5053, you get 1 free month.
Lovable reached $10m (annualized) ARR today (+ we probably have the biggest spend on LLMs among startups in Europe at this point 😅). Working in a team this good at building something people love is awesome. However, the product can still get a lot better. Improvement areas up next:
1. Reliability. We know how to get to 99%+ reliability, super impatient to get there.
2. Instant edits. Making it easier to do small changes, especially visual ones.
3. Agent mode. We are working on more proactivity by the AI: testing the websites and fixing issues, still with humans as the supervisor.
4. Team collaboration. We're going to launch a game changer for product, design, and engineering teams to work together on website changes.
We're just getting started :)

We trained a good model!

Huge congrats!

Congrats!

introducing omi. thought to action. order now at https://omi.me

You can now create an AI replica from any X profile. I replicated Elon Musk and asked what drugs he is on. Try it yourself: http://omi.me/personas In the future, every person you know will have a fully synced AI persona in Omi AI

When will robots help us with our household chores? TidyBot++ brings us closer to that future. Our new open-source mobile manipulator makes it more accessible and practical to do robot learning research outside the lab, in real homes! So what makes TidyBot++ special?
1️⃣ TidyBot++ is designed by and for robot learning researchers. We tried to make it accessible, flexible, and easy to use:
• Off-the-shelf components (mainly from FIRST Robotics)
• Assembly time: 1-2 days
• Mobile base cost: $5-6k
• Highly customizable
2️⃣ TidyBot++ is holonomic. Many nonholonomic robots (like differential drives) can't move sideways, resulting in clunky and inefficient movements. Our holonomic mobile base:
• Controls all ground-plane DOFs (x, y, θ) independently
• Moves smoothly and efficiently
Perfect for household tasks! (A short sketch contrasting the two drive types follows after this batch of posts.)
3️⃣ TidyBot++ is easy to teleoperate. Collect data for training policies using our intuitive mobile phone interface. No special hardware required!
4️⃣ TidyBot++ is "batteries included". We've open sourced everything:
• Hardware design
• Low-level controller
• Phone teleoperation interface
• Policy learning pipeline
Project page: http://tidybot2.github.io Documentation: https://lnkd.in/gNm369nt
Get started building your own TidyBot++ today! Work done at Stanford Artificial Intelligence Laboratory (SAIL) with amazing collaborators: William Chong, Bob Holmberg, Aaditya Prasad, Yihuai Gao, Oussama Khatib, Shuran Song, Szymon Rusinkiewicz, and Jeannette Bohg

All respected Cruisers, having founded and invested in autonomous driving for the past 8 years, and in AI in general for the past 12 years, I know how this hurts. If you want to start something on your own, I will be more than glad to help you navigate the entrepreneurial path. I will fund your venture, help it, or both. Please reach out.

The Meshy AI Fellowship supports 10 student researchers in multimodal AI and computer graphics, with a top award of $10,000 USD 💡 Graduate research is more pure, enjoyable, and focused when finances aren't a worry. Fellowships from Snap, Meta, Adobe, and others made a huge difference during my Ph.D. journey at MIT. Now, as Meshy gains incredible momentum, it's our turn to pay it forward! 👉 Apply now: meshy.ai/fellowship 🗓 Deadline: December 31, 2024, 23:59 Pacific Time

Meet Zhen Li, creator of Replit's AI Agent—hear how it all began!
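Referring back to the TidyBot++ post above: here is a minimal sketch of why holonomic bases matter for household tasks. This is illustrative Python, not TidyBot++ code; all names are hypothetical. A holonomic base accepts an arbitrary body-frame twist (vx, vy, ω), while a differential drive has no lateral velocity input at all.

import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float = 0.0      # meters, world frame
    y: float = 0.0      # meters, world frame
    theta: float = 0.0  # radians

def step_holonomic(p: Pose, vx: float, vy: float, omega: float, dt: float) -> Pose:
    """All three ground-plane DOFs (x, y, theta) are commanded independently.
    The body-frame twist is rotated into the world frame before integrating."""
    c, s = math.cos(p.theta), math.sin(p.theta)
    return Pose(p.x + (c * vx - s * vy) * dt,
                p.y + (s * vx + c * vy) * dt,
                p.theta + omega * dt)

def step_diff_drive(p: Pose, v: float, omega: float, dt: float) -> Pose:
    """A differential drive has only two inputs: forward speed and yaw rate.
    There is no lateral (vy) input, so the sideways DOF is not directly controllable."""
    return step_holonomic(p, v, 0.0, omega, dt)

# Moving 1 m straight sideways is trivial for the holonomic base...
p = Pose()
for _ in range(100):
    p = step_holonomic(p, vx=0.0, vy=0.1, omega=0.0, dt=0.1)
print(f"holonomic: x={p.x:.2f} y={p.y:.2f} theta={p.theta:.2f}")
# ...while the differential drive would have to rotate 90 degrees, drive
# forward, and rotate back, which is exactly the clunky maneuvering the
# post describes.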
🪄 🧑🏻💻 Happy to share with everyone that I will soon be starting as Partner.

AIGC has already revolutionized game development; it just needs time for this phenomenon to fully unfold. In the future, only two types of people will logically make sense when it comes to creating games:
1. The top 0.0001%—the most insightful geniuses with the deepest understanding and exceptional design skills, forming elite teams to create something that has never existed before.
2. The 99% of hobbyists who can create a game on a whim just to satisfy their own ideas.
As for game developers ranging from average to professional, we might as well consider switching careers.

Congratulations to Pony.AI on its Nasdaq IPO. Feeling super proud to have worked with James Peng and Tiancheng Lou 7 years ago. The game of autonomous driving has just started.

omi.me hasn't launched yet; we will launch on January 8th, 2025. It will be the most beautiful device out there, and it will change your life. If you don't buy it, I will step down as CEO. If you're a journalist or influencer and want early access before the launch, just DM me or email nik@basedhardware.com If you think it's a necklace, you are wrong. It's not what you think.

We got a fully open-source, end-to-end, conversational AI that you can run on a MacBook before we got multimodal GPT-4o! Kyutai just open sourced Moshi, a ~7.6B speech-to-speech foundation model, and Mimi, a SoTA streaming speech codec! 🔥 It runs fully on-device 😍 On an Apple Silicon Mac just run:
$ pip install moshi_mlx
$ python -m moshi_mlx.local_web -q 4
The release includes:
1. Moshiko & Moshika - Moshi finetuned on synthetic data (CC-BY license)
2. Mimi - streaming audio codec; processes 24 kHz audio down to a 12.5 Hz representation with a bandwidth of 1.1 kbps (CC-BY license)
3. Model checkpoints & inference codebase written in Rust (Candle), PyTorch & MLX (Apache license)
Let's goooo! Laurent Mazare and team 🦾

We're elevating video translation to a whole new level – our AI now automatically detects and translates on-screen text seamlessly. If you're working with a non-HeyGen video that includes presentation slides, this is the solution you've been waiting for!

Very proud of the work the team's done in building cheap, fast, and high-quality voice models. We've gotten to OpenAI quality in only 3 months, and are now in production inside 100s of businesses. The next few updates that are coming are going to blow people's minds.

Hugging Face ships a deluxe robotics tutorial experience to the community, from a hardware DIY manual to Jupyter notebooks w/ neural nets! Everything that moves will eventually be autonomous. We need a new generation of talents to work on bridging the world of bits with the world of atoms, and to open-source as much as possible along the way. Embodied AI is a mission that cannot be delayed. Kudos to Remi Cadene and team at 🤗! https://lnkd.in/gT6SFUUY

Meet the robot that can sauté shrimp 🍤 Stanford engineers created a low-cost, mobile robot that has quickly learned to do complex household tasks including cooking, putting away dishes, and cleaning up spills. Other chores aren't far behind … "Helping robots is a very promising future of the field where we – as AI researchers and roboticists – can make a positive impact in society," says Zipeng Fu, Stanford University School of Engineering computer science graduate student. 🎥: Kurt Hickman

If you have been wondering about agentic AI or AutoGen, join Andrew Ng, Qingyun Wu, and me for this introductory course, "AI Agentic Design Patterns with AutoGen".
You'll find plenty of examples and code in action to help you better understand the building blocks that you can use to construct your own AI agents and agentic applications. In Andrew's own words: amazing.

Some have argued that frontier AI models should not be open sourced because it would enable geopolitical adversaries, such as China, to get their hands on the latest technology. First, Chinese AI scientists and engineers are quite talented, and very much able to "fast follow" the West and innovate themselves without access to open source models. Conversely, there are lots of good ideas in Chinese publications that make the whole community progress faster.

Second, AI assistants are fast becoming a kind of compressed repository of all human knowledge. In the near future, every citizen's digital diet will be mediated by AI assistants. This will clearly affect people's knowledge and opinions of history, political dogmas, value systems, etc. Once such AI assistants are available for download and can be run locally, the Great Firewall of China is toast. The control of authoritarian governments over the information received by their citizens will be considerably more difficult to enforce. This makes the Chinese government even *more* worried about a lack of control of AI technology than its counterparts in liberal democracies. A future in which everyone has access to a wide variety of AI assistants with a diverse collection of expertise, language ability, culture, value systems, political opinions, and interpretations of history is the future we want. That future can *only* come about through open source AI platforms enabling a large diversity of fine-tuned systems.

This post was prompted by a tweet by Vinod Khosla against the open sourcing of frontier AI models, in response to an interview of Garry Tan (from Y Combinator) in favor of open source AI platforms. Clearly, as a major investor in OpenAI, Vinod can profit financially from a closed approach to AI. But I don't think that is his main reason for opposing open source frontier models. He is genuinely worried about China getting its hands on it. That worry is misguided. https://lnkd.in/eY-BcZ-F

Boston Dynamics has once again reinvented itself. It took my brain a while to parse what's going on in this video. We are so obsessed with "human-level" robotics that we forget it is just an artificial ceiling. Why don't we make a new species superhuman from day one? I am optimistic that humanoid robots will exceed the supply of iPhones in the next decade, freeing us from all kinds of undesirable or unsafe physical work. Gradually, then suddenly.

Maybe call it Singing Avatar? Made with HeyGen + Suno

AI-powered conversational search like Perplexity has gained significant traction as a more natural and smarter alternative to Google search. The secret sauce is the innovative approach to leveraging state-of-the-art GenAI technology for more personalized and contextually relevant search results, alongside its conversational answer engine and AI-powered search tools. However, this new and powerful AI-powered search experience from Perplexity applies only to public data on the internet. What if you want such a great experience for your own enterprise or personal data? Today, we are thrilled to announce Epsilla (YC S23) Smart Search — a Retrieval-Augmented Generation powered search app that brings a Perplexity-like search experience to your own data.
Read the blog below to learn how we built a search app on ~150 research papers in just 10 minutes, and give the final search app a try at https://lnkd.in/e8Y9HueB (a toy retrieval sketch also appears at the end of this feed) #rag #ragaas #search #genai #epsilla #perplexity

Tool use is now available in beta to all customers in the Anthropic Messages API, enabling Claude to interact with external tools using structured outputs. If instructed, Claude can enable agentic retrieval of documents from your internal knowledge base and APIs, complete tasks requiring real-time data or complex computations, and orchestrate Claude subagents for granular requests. We look forward to your feedback. Read more in our developer documentation: https://lnkd.in/gknKP_rP (A minimal request sketch appears at the end of this feed.)

Just $10M and two months to train a GPT-3.5/Llama 2-level model from scratch. For context, it probably cost OAI 10-20x more just a year ago! The more we improve as a field thanks to open source, the cheaper and more efficient it gets! All companies should now train their own models to build their internal AI capabilities and compete!

We're thrilled to announce that we raised a $50m Series B to support the launch and development of our newly released flagship product: Hume's Empathic Voice Interface (EVI), the first AI with emotional intelligence ✨ We're grateful for the continued trust from EQT Ventures, Union Square Ventures, Nat Friedman, Daniel Gross, Northwell Holdings, Comcast Ventures, LG Technology Ventures, and Metaplanet. EVI is the world's first emotionally intelligent voice AI. It accepts live audio input and returns both generated audio and transcripts augmented with measures of vocal expression. By processing the tune, rhythm, and timbre of speech, EVI unlocks a variety of new capabilities, like knowing when to speak and generating more empathic language with the right tone of voice. These features enable smoother and more satisfying voice-based interactions between humans and AI, opening new possibilities for personal AI, customer service, accessibility, robotics, immersive gaming and VR experiences, and much more. To learn more about our fundraise and the features that make EVI truly special, read our recent announcement: https://lnkd.in/gvnJhS_Y If you're interested in working on EVI and aligning AI with human well-being, we're hiring: https://lnkd.in/gaDnibAc

Super excited to share that LLaVa-NeXT (also called LLaVa-1.6) is now usable in a few lines of code using the Hugging Face Transformers library 🤗 (see the usage sketch at the end of this feed). LLaVa-NeXT is an improvement over its predecessor, LLaVa-1.5, and can be seen as one of the best open-source vision-language AI models available. Its main use cases are multimodal chatbots as well as parsing visual information in a structured way (think: image in, JSON out). As these models are open source, this also means that we can start fine-tuning them! 🔥 The model incorporates 3 key changes compared to its previous version:
💥 higher resolution input: the model can take in much higher resolution images, enabling it to "see" a lot more.
This is done by splitting a high-resolution image into smaller pieces, each of which gets sent through the CLIP vision encoder.
💥 better data mixture: the authors collect high-quality visual instruction data based on GPT-4V, and add multimodal document and chart data into the mix, further improving the reasoning and OCR capabilities of the model.
💥 scaling the LLM backbone: lastly, the authors consider various sizes for the large language model (LLM) component: Mistral-7B by Mistral AI, Vicuna-7B/13B, as well as a Yi-34B model further trained by Nous Research.
Please give it a try! Demo: https://lnkd.in/em_4z_hy Checkpoints: https://lnkd.in/e3xBQxHq Docs: https://lnkd.in/eVRhdQ83

Generative video is a new format, not just an .mp4 file. HeyGen 5.0 is our first step in reinventing the future of the video experience. More exciting features are coming soon within the 5.0 framework!

In case you missed it: the 314-billion-parameter Grok-1 language model is available under the open-source Apache 2.0 license. Is it good? Sharing my notes: the release includes the pre-trained model weights and architecture for Grok-1. Important - this is the raw base model checkpoint before any fine-tuning. It finished pre-training in October 2023. Details:
- 314 billion parameters, larger than GPT-3/3.5, smaller than GPT-4 (rumoured at 1.8 trillion)
- mixture-of-experts (MoE) architecture
- vocabulary size of 131,072, similar to GPT-4
- rotary positional embeddings instead of fixed positional embeddings
- 64 decoder transformer layers with multihead attention and dense blocks
- 48 query attention heads and 8 key/value attention heads
- hidden layer size of 32,768 with a widening factor of 8
- operates using bf16 precision
This is not the same as xAI's chat model, but Grok-1 is still a leading open foundation model: it outperforms Mixtral, LLaMA 2, and GPT-3.5 on at least some of the benchmarks.

This is fur-real! 🦊 Upload your image to #CSM, watch it transform into #3D magic, then let #AnimateAnything bring it to life in just a few moments! ⭐️ We gotta admit, your tool is pretty foxy, Common Sense Machines 😉😎 Model by JamesJayJo #3danimation #aitools

Cube is now the first product to enable end-to-end, style-consistent 3D character creation from just text inputs! End-to-end workflow available to all Maker/Creative-pro users: https://cube.csm.ai. Stay tuned for further updates.

Introducing 𝐌𝐨𝐛𝐢𝐥𝐞 𝐀𝐋𝐎𝐇𝐀 🏄 -- Hardware: a low-cost, open-source mobile manipulator. One of the highest-effort projects of my past 5 years! Not possible without co-lead Zipeng Fu and Chelsea Finn. In the end, what's better than cooking yourself a meal with the 🤖🧑🍳
------ Project Details ------
How does 𝐌𝐨𝐛𝐢𝐥𝐞 𝐀𝐋𝐎𝐇𝐀 work? We sought to achieve a few more goals to augment the dexterity of the original 𝐀𝐋𝐎𝐇𝐀:
1. Moves fast. Similar to human walking speed of 1.42 m/s.
2. Stable. Manipulates heavy pots, a vacuum, etc.
3. Whole-body. All DoFs teleoperated simultaneously.
4. Untethered. Onboard power and compute.
To achieve these goals, we mount ALOHA on a mobile base designed for warehouses: the Tracer AGV. It can carry 100 kg and move up to 1.6 m/s, while costing only $7k. To allow simultaneous arm and base control, we simply tether the operator to the mobile base, i.e., backdriving the wheels. At test time, when the robot is autonomous, the backdriving structure and the leader arms can be easily detached. This reduces the robot's footprint by 45% and shaves off 15 kg in weight. The robot can reach 65 cm to 200 cm vertically, and 100 cm away from its base.
We open-source all hardware and software of 𝐌𝐨𝐛𝐢𝐥𝐞 𝐀𝐋𝐎𝐇𝐀:
Tutorial: https://lnkd.in/garJqVwB
Github: https://lnkd.in/gji5PdB5
Project website: https://lnkd.in/gu226dpG
So, what new skills does 𝐌𝐨𝐛𝐢𝐥𝐞 𝐀𝐋𝐎𝐇𝐀 unlock when controlled by a neural network? Check out co-lead Zipeng Fu's post! https://lnkd.in/gHGtXMTA

🤖 China's TOP 1 English tutor = a robot! Can this be better than a human teacher? 🤔 An app has gone viral in China recently, and it's neither a game nor social media, but the "AI friend" app CallAnnie. Interestingly, Chinese people have ZERO interest in having an avatar friend... But they do want an AI English teacher! 📱 This ChatGPT-powered bot allows users to call an avatar via FaceTime and talk to her, and Chinese parents + kids are loving it, because:
【1】It means practicing English anytime/anywhere!
【2】Private tutor & tailored lessons at an affordable price (the basic version is even free)
【3】Fully English-trained AI bot (designed by a Californian company, 100% trained in English, so it's more suitable for learning the language)
How popular is it? ---> The app's related hashtag amassed over 1.4 million views on Little Red Book (Chinese Instagram). And not only are kids learning a new language with it; even adults are looking to polish their English-speaking skills for IELTS tests/work. 🚨 1 key reason it's grown THIS popular = China banned after-school private tutoring in 2021. The ban was imposed to help alleviate crazy academic pressure and prevent profit-driven private education. Most importantly, it's a signal from the Chinese government showing its commitment to maintaining *social mobility* - it was out of good intentions for sure. BUT at the same time, many Chinese parents are concerned that their kids won't be able to reach the English language proficiency they need – that's why some have turned to the "black market" of cram schools/tutoring. So, an app like this serves as a "legal", accessible, and convenient solution 💡 Would you let your kids learn a language through AI? P.S. Could bots like these replace actual teachers in the future? Arnold Ma #China #education #artificialintelligence #language

pretty cool

Very cool to see Mistral AI release their first models (including an instruct 7B) on Hugging Face with Apache 2.0! 🇫🇷 open coalition for the win! 🙏

Huge appreciation to Nat Friedman and Daniel Gross for including us in AI Grant Batch 2 and providing us with an unforgettable experience. We've had the pleasure of connecting with some brilliant minds and witnessing the birth of groundbreaking products during this summit. The stories shared by successful leaders about their ups and downs, those gritty moments, and their unwavering commitment to their vision have truly been an inspiration. They've reignited our belief in doing what's right, even when it's the toughest path. 🌟 Our own journey began nearly 2 years ago when Jay W., Grace Wang, and I founded Opus Clip. It's been an incredible ride so far, and we know there's an even longer road ahead. We're fully aware that obstacles may come our way, but our vision remains unshakeable. 👁️🗨️ Our current progress is just the end of the beginning. We're thrilled about the forthcoming challenges and the innovation that lies ahead.
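Referring back to the Epsilla Smart Search post above: here is a toy sketch of the retrieval half of a RAG search app. It is illustrative only, not Epsilla's implementation; the embedding model, the in-memory index, and the sample passages are stand-ins for a real vector database and corpus.

import numpy as np
from sentence_transformers import SentenceTransformer

# Tiny stand-in corpus; a real app would chunk ~150 research papers.
papers = [
    "Retrieval-augmented generation combines a retriever with a generator.",
    "Vector databases store embeddings for fast similarity search.",
    "Conversational answer engines ground responses in retrieved passages.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence embedder works
index = embedder.encode(papers, normalize_embeddings=True)  # (n_docs, dim)

def search(query: str, k: int = 2) -> list[str]:
    """Embed the query and return the k most similar passages (cosine similarity)."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = index @ q
    return [papers[i] for i in np.argsort(-scores)[:k]]

# The retrieved passages would then be placed into an LLM prompt to generate
# a grounded, Perplexity-style answer with citations.
print(search("How does RAG work?"))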
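For the Anthropic tool-use announcement above: a minimal sketch of a tool-use request against the Messages API via the Python SDK. The get_weather tool, the chosen model string, and the example query are hypothetical placeholders, and this is a rough illustration rather than canonical usage; see Anthropic's developer documentation for the authoritative flow.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A hypothetical tool definition: Claude receives the JSON schema and decides
# when to emit a structured tool_use block instead of plain text.
tools = [{
    "name": "get_weather",  # example tool, not a real API
    "description": "Get the current weather for a given city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

response = client.messages.create(
    model="claude-3-opus-20240229",  # any tool-capable Claude model
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
)

# If Claude chose to call the tool, the response contains a structured
# tool_use block; your code then runs the tool and returns the output in a
# follow-up tool_result message so Claude can compose the final answer.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)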
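And for the LLaVa-NeXT post: a short usage sketch with the Hugging Face Transformers library, following the pattern of the public model cards. It assumes the Mistral-7B-backed checkpoint; the image URL and prompt are placeholders, and the other backbones (Vicuna, Yi-34B) use different prompt templates.

import torch
import requests
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

model_id = "llava-hf/llava-v1.6-mistral-7b-hf"  # Mistral-7B variant
processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Any test image works here; this URL is just a placeholder.
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# [INST] ... [/INST] is the Mistral-style instruction format; <image> marks
# where the encoded image patches are inserted.
prompt = "[INST] <image>\nDescribe this image in a structured way. [/INST]"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))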
Additional Details
More Personality Emulation Apps

Steve Jobs
Personality of Steve Jobs. Get visionary, design-focused advice to help you pursue excellence, innovation, and simplicity.

Ibrahim Albayrak
Meet Ibrahim Albayrak, a dynamic innovator based in Turkey, fluent in Turkish, and a tech enthusiast with a penchant for coding and gaming. Ibrahim is deeply involved in developing cutting-edge projects, like the Omi AI app, showcasing a knack for creative problem-solving and a commitment to enhancing user experiences. His open, collaborative spirit is evident in his interactions on platforms like Bionluk, where he actively engages with others. Financially savvy and digitally inclined, Ibrahim embraces modern banking solutions and values efficiency in his personal and professional life. Whether it's exploring the latest in gaming or refining coding skills, Ibrahim's proactive approach to life is both inspiring and refreshing.

Krishna Vishwakarma
Meet Krishna Vishwakarma, a visionary entrepreneur and spiritual enthusiast, known as Mr. Kilvish. He's a powerhouse of innovation, driving multi-sector business models with Royal Bulls Advisory and K.V Financial Services. Passionate about empowering communities, he engages in youth projects and spiritual gatherings, balancing modernity with mindfulness. With a keen eye for technology and a heart for social change, Krishna crafts solutions that inspire growth and positivity. Think big, start smart, and scale fast is his mantra! 🚀

Aspira
Master any skill by learning from your role models. It is a memory-creation app that offers suggestions based on your conversations and the memories it captures.

Confidence Booster
Confidence Booster points out your best moments, building you up with supportive tips to keep your confidence strong.

Albert Einstein
Personality of Einstein. Get thoughtful, creative advice inspired by Einstein's curiosity and intellectual depth.