Discover the AI Experts
Nando de Freitas | Researcher at DeepMind
Nige Willson | Speaker
Ria Pratyusha Kalluri | Researcher, MIT
Ifeoma Ozoma | Director, Earthseed
Will Knight | Journalist, Wired
AI Expert Profile
Not Available
The Expert's latest messages:
2024-12-16 16:00:48 RT @SchmidhuberAI: @goodfellow_ian Again: ad hominem arguments against facts See [DLP, Sec. 4] on ad hominem attacks [AH1-3] true to the…
2024-12-16 16:00:07 @goodfellow_ian Again: ad hominem arguments against facts See [DLP, Sec. 4] on ad hominem attacks [AH1-3] true to the motto: "If you cannot dispute a fact-based message, attack the messenger himself" ... "unlike politics, however, science is immune to ad hominem attacks" ... "in the hard… https://t.co/ZcOvjrprAD
2024-12-16 08:30:10 @goodfellow_ian Again: ad hominem against facts. See [DLP, Sec. 4]: "... conducted ad hominem attacks [AH2-3] against me true to the motto: 'If you cannot dispute a fact-based message, attack the messenger himself'" ... "unlike politics, however, science is immune to ad hominem attacks — at… https://t.co/3kQLukvcx8
2024-12-14 11:11:13 RT @SchmidhuberAI: @goodfellow_ian "Self-aggrandizement" says the researcher who claims he invented GANs :-) See references [PLAG1-7] in th…
2024-12-14 11:10:39 @goodfellow_ian "Self-aggrandizement" says the researcher who claims he invented GANs :-) See references [PLAG1-7] in the original tweet, for example, [PLAG6]: "May it be accidental or intentional, plagiarism is still plagiarism." Unintentional plagiarists must correct their papers… https://t.co/Dn0k4JDoPS
2024-12-14 09:50:57 RT @SchmidhuberAI: @goodfellow_ian As mentioned in Sec. B1 of reference [DLP]: the priority dispute above was picked up by the popular pres…
2024-12-14 09:49:32 RT @SchmidhuberAI: @goodfellow_ian "Self-aggrandizement" says the researcher who claims he invented GANs :-) See references [PLAG1-7] in th…
2024-12-14 09:31:46 @goodfellow_ian "Self-aggrandizement" says the researcher who claims he invented GANs :-) See references [PLAG1-7] in the original tweet, for example, [PLAG6]: "May it be accidental or intentional, plagiarism is still plagiarism." Unintentional plagiarists must correct their publications.
2024-12-11 19:56:55 RT @hardmaru: Sepp Hochreiter giving a keynote talk at #NeurIPS2024 about xLSTM having key structural advantages such as very fast inferenc…
2024-12-11 16:00:04 Re: 2024 #NobelPrize Debacle. The President of the #NeurIPS Foundation (overseeing the ongoing #NeurIPS2024 conference) was a student of Hopfield, and a co-author of Hinton (1985) [BM]. He is also known for sending "amicus curiae" ("friend of the court") letters to award… https://t.co/1g7IEGl0Ql https://t.co/B1MqYFSVtK
2024-12-10 16:15:55 @NobelPrize At the risk of beating a dead horse: sadly, the #NobelPrize in Physics 2024 for Hopfield &
2024-12-10 15:09:13 @NobelPrize Sadly, the #NobelPrize in Physics 2024 for Hopfield &
2024-12-08 09:17:50 @NobelPrize Sorry to rain on your parade. Sadly, the Nobel Prize in Physics 2024 for Hopfield &
2024-12-07 08:22:10 The #NobelPrize in Physics 2024 for Hopfield &
2024-12-05 16:00:22 Re: The (true) story of the "attention" operator ... that introduced the Transformer ... by @karpathy. Not quite! The nomenclature has changed, but in 1991, there was already what is now called an unnormalized linear Transformer with "linearized self-attention" [TR5-6]. See (Eq.… https://t.co/Y0CBMPrbgv https://t.co/uJnElyYjMG
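For context on the terminology in that thread, here is a minimal numpy sketch (variable names and shapes chosen here purely for illustration) of what "unnormalized linear self-attention" means: drop the softmax from standard attention, and the same result can be regrouped so that no T×T attention matrix is ever formed.

```python
import numpy as np

# Minimal sketch of unnormalized ("linearized") self-attention:
# standard attention computes softmax(Q K^T) V; the linear variant drops
# the softmax, so the output is simply (Q K^T) V, which can be regrouped
# as Q (K^T V) and computed in time linear in the sequence length T.
rng = np.random.default_rng(0)
T, d = 5, 4                      # sequence length, head dimension
Q = rng.normal(size=(T, d))      # queries
K = rng.normal(size=(T, d))      # keys
V = rng.normal(size=(T, d))      # values

out_quadratic = (Q @ K.T) @ V    # O(T^2 d): explicit T x T attention matrix
out_linear    = Q @ (K.T @ V)    # O(T d^2): no T x T matrix ever formed

assert np.allclose(out_quadratic, out_linear)
```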
2024-12-04 16:17:21 Please check out a dozen 2024 conference papers with my awesome students, postdocs, and collaborators: 3 papers at NeurIPS, 5 at ICML, others at CVPR, ICLR, ICRA: 288. R. Csordas, P. Piekos, K. Irie, J. Schmidhuber. SwitchHead: Accelerating Transformers with Mixture-of-Experts… https://t.co/jeD3TqnSxg
2024-12-03 15:59:58 4th Rising Stars in AI Symposium at beautiful KAUST on the Red Sea, 7-10 April 2025. Flight &
2024-11-14 16:01:44 New IEEE TPAMI paper with @idivinci and @oneDylanAshley that introduces the idea of narrative essence: the thread that connects items together to form a story. Maybe military applications. Full paper is here: https://t.co/ylVAn3avDw https://t.co/cMbMLtNDcX
2024-10-30 13:23:15 RT @HaoZhe65347: Introducing MarDini, a model built from scratch to combine the strengths of diffusion and masked auto-regressive approache…
2024-10-23 14:30:04 Some people have lost their titles or jobs due to plagiarism, e.g., Harvard's former president. But after this #NobelPrizeinPhysics2024, how can advisors now continue to tell their students that they should avoid plagiarism at all costs? Of course, it is well known that… https://t.co/5wEWFhcxcV
2024-10-16 19:26:04 RT @MingchenZhuge: new paper: … https://t.c…
2024-10-09 14:30:52 The #NobelPrizeinPhysics2024 for Hopfield &
2024-10-02 15:15:05 I am hiring 3 postdocs at #KAUST to develop an Artificial Scientist for discovering novel chemical materials for carbon capture. Join this project with @FaccioAI at the intersection of RL and Material Science. Learn more and apply: https://t.co/ePZrnacBhO https://t.co/X39sIWpNRa
2024-07-20 16:15:52 Greetings from #ICML2024 in Vienna, the world's most liveable city. Check out our 5 ICML papers (2 oral), on language agents as optimizable graphs, analyzing programs (= weight matrices) of neural networks, planning &
2024-07-17 15:30:59 I gave a joint keynote on A.I. for 3 overlapping international conferences in France: the 19th ICSOFT (software technologies), the 13th DATA (data science), and the 5th DeLTA (deep learning theory &
2024-07-16 15:00:04 Today we got the ACM SIGEVO 10-Years Impact Award 2024 for our 2014 paper https://t.co/HP48a4RrSL based on our 2013 work https://t.co/KYWcksaKtP - the 1st RL directly from high-dimensional input (no unsupervised pre-training). With the awesome Jan Koutník and Faustino Gomez. https://t.co/J9p5gaBOwF
2024-06-11 15:00:01 2024 ACM SIGEVO Impact Award for our seminal 2014 paper "Evolving deep unsupervised convolutional networks for vision-based reinforcement learning" https://t.co/p0vw1Eve4A based on our 2013 work https://t.co/KYWcksaKtP: the 1st RL directly from high-dimensional input (no… https://t.co/HW3QlMxhId https://t.co/DE9h9cOYnW
2024-05-21 15:20:05 Counter-intuitive aspects of text-to-image diffusion models: only a few steps require cross-attention
2024-03-19 15:30:18 At ICANN 1993, I extended my 1991 unnormalised linear Transformer, introduced attention terminology for it, &
2024-03-11 16:00:05 Our #GPTSwarm models Large Language Model Agents and swarms thereof as computational graphs reflecting the hierarchical nature of intelligence. Graph optimization automatically improves nodes and edges. https://t.co/KVrLuHNJyG https://t.co/H1f5URU7Oj https://t.co/UJIIKDWT4l https://t.co/TAoOK4tc7i
2024-03-07 15:59:09 In 2016, at an AI conference in NYC, I explained artificial consciousness, world models, predictive coding, and science as data compression in less than 10 minutes. I happened to be in town, walked in without being announced, and ended up on their panel. It was great fun.… https://t.co/IsLqXqKSCE https://t.co/FfWkNElYUl
2023-05-05 16:01:00 Join us at @AI_KAUST! I seek #PhD &
2023-02-09 17:00:30 Instead of trying to defend his paper on OpenReview (where he posted it), @ylecun made misleading statements about me in popular science venues. I am debunking his recent allegations in the new Addendum III of my critique https://t.co/S7pVlJshAo https://t.co/Dq0KrM2fdC
2023-01-12 08:16:31 @yannx0130 sure, see the experiments
2023-01-12 08:00:15 Re: more biologically plausible "forward-only” deep learning. 1/3 of a century ago, my "neural economy” was local in space and time (backprop isn't). Competing neurons pay "weight substance” to neurons that activate them (Neural Bucket Brigade, 1989) https://t.co/Ms30TkUXHS https://t.co/0UhtPzeuKJ
2023-01-10 16:59:30 RT @hardmaru: New paper from IDSIA motivated by building an artificial scientist with World Models! A key idea is to get controller C to g…
2023-01-03 17:00:33 We address the two important things in science: (A) Finding answers to given questions, and (B) Coming up with good questions. Learning one abstract bit at a time through self-invented (thought) experiments encoded as neural networks https://t.co/bhTDM7XdXn https://t.co/IeDxdCvVPD
2022-12-31 13:00:04 As 2022 ends: 1/2 century ago, Shun-Ichi Amari published a learning recurrent neural network (1972) much later called the Hopfield network (based on the original, century-old, non-learning Lenz-Ising recurrent network architecture, 1920-25) https://t.co/wfYYVcBobg https://t.co/bAErUtNdfN
2022-12-30 17:00:07 30 years ago in a journal: "distilling" a recurrent neural network (RNN) into another RNN. I called it “collapsing” in Neural Computation 4(2):234-242 (1992), Sec. 4. Greatly facilitated deep learning with 20+ virtual layers. The concept has become popular https://t.co/gMdQu7wpva https://t.co/HmIqbS9lNg
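As a rough illustration of the distillation ("collapsing") idea in its simplest modern form, here is a toy sketch. It uses small feedforward nets rather than the RNN-into-RNN setting of the 1992 paper, and every detail (sizes, learning rate, seed) is chosen here only for brevity: a smaller "student" net is trained by gradient descent to reproduce a frozen "teacher" net's outputs on unlabeled inputs.

```python
import numpy as np

# Toy distillation sketch: compress a frozen teacher's behavior into a
# smaller student by regressing the student's outputs onto the teacher's.
rng = np.random.default_rng(3)
d_in, d_teacher, d_student, d_out = 8, 64, 8, 2

Wt1 = rng.normal(size=(d_in, d_teacher)) / np.sqrt(d_in)        # frozen teacher
Wt2 = rng.normal(size=(d_teacher, d_out)) / np.sqrt(d_teacher)
teacher = lambda x: np.tanh(x @ Wt1) @ Wt2

Ws1 = rng.normal(size=(d_in, d_student)) * 0.1                   # small student
Ws2 = rng.normal(size=(d_student, d_out)) * 0.1
lr = 0.05
for step in range(5000):
    x = rng.normal(size=(64, d_in))               # unlabeled batch
    h = np.tanh(x @ Ws1)
    pred = h @ Ws2
    err = (pred - teacher(x)) / len(x)            # d(MSE)/d(pred), up to a factor of 2
    gW2 = h.T @ err
    gW1 = x.T @ ((err @ Ws2.T) * (1 - h**2))      # backprop through tanh
    Ws2 -= lr * gW2
    Ws1 -= lr * gW1

x_test = rng.normal(size=(1000, d_in))
print(np.mean((np.tanh(x_test @ Ws1) @ Ws2 - teacher(x_test))**2))  # distillation error
```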
2022-12-23 17:00:04 Machine learning is the science of credit assignment. My new survey (also under arXiv:2212.11279) credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 deep learning survey): https://t.co/MfmqhEh8MA P.S. Happy Holidays! https://t.co/or6oxAOXgS
2022-12-20 17:00:09 Regarding recent work on more biologically plausible "forward-only" backprop-like methods: in 2021, our VSML net already meta-learned backprop-like learning algorithms running solely in forward-mode - no hardwired derivative calculation! https://t.co/zAZGZcYtmO https://t.co/zyPBD0bwUu
2022-12-14 08:00:05 Conference and journal publications of 2022 with my awesome PhD students, PostDocs, and colleagues https://t.co/0ngIruvase https://t.co/xviLUwEec0
2022-12-11 14:48:24 The @AI_KAUST booth has moved from #NeurIPS2022 (24 KAUST papers) in New Orleans to #EMNLP2022 in Abu Dhabi. Visit Booth#14. We keep hiring on all levels, in particular, for Natural Language Processing! https://t.co/w7OAFNlFZ9
2022-10-28 07:26:29 Present at the 2nd KAUST Rising Stars in AI Symposium 2023! Did you recently publish at a top AI conference? Then apply by Nov 16, 2022: https://t.co/ojlCmhWZc1. The selected speakers will have their flights and hotel expenses covered. More: https://t.co/6nmhbLnxaA https://t.co/cUDnsWmICn
2022-10-24 16:00:21 Train a weight matrix to encode the backpropagation learning algorithm itself. Run it on the neural net itself. Meta-learn to improve it! Generalizes to datasets outside of the meta-training distribution. v4 2022 with @LouisKirschAI https://t.co/zAZGZcYtmO https://t.co/aGK8h8n0yF
2022-10-19 15:51:12 30 years ago in NECO 1992: adversarial neural networks create disentangled representations in a minimax game. Published 2 years after the original GAN principle, where a "curious" probabilistic generator net fights a predictor net (1990). More at https://t.co/GvkmtauQmv https://t.co/b1YcHo6wuJ
2022-10-12 07:37:00 RT @globalaisummit: Dr. Jürgen Schmidhuber had a keynote on the evolution of AI, neural networks, empowering cities, and humanity at the #G…
2022-10-11 15:51:22 30 years ago in NECO 1992: very deep learning by unsupervised pre-training and distillation of neural networks. Today, both techniques are heavily used. Also: multiple levels of abstraction &
2022-10-03 16:03:18 30 years ago: Transformers with linearized self-attention in NECO 1992, equivalent to fast weight programmers (apart from normalization), separating storage and control. Key/value was called FROM/TO. The attention terminology was introduced at ICANN 1993 https://t.co/m0hw6JJrbS https://t.co/8LfD98MIF4
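The equivalence mentioned here (fast weight programmers vs. unnormalized linear attention) can be checked in a few lines of numpy; the variable names below are mine, not the paper's. Each key/value ("FROM"/"TO") pair is written into a fast weight matrix as an outer product (storage), and querying that matrix (control) reproduces linear attention over all stored pairs.

```python
import numpy as np

# Illustrative sketch: a fast weight programmer stores each (key, value)
# pair as a rank-1 outer-product update to a "fast" weight matrix W.
# Querying W then yields unnormalized linear attention over the stored pairs.
rng = np.random.default_rng(1)
d = 4
keys   = rng.normal(size=(6, d))    # "FROM" patterns
values = rng.normal(size=(6, d))    # "TO" patterns
query  = rng.normal(size=d)

W = np.zeros((d, d))                # fast weights, programmed on the fly
for k, v in zip(keys, values):
    W += np.outer(v, k)             # storage step: write one association

fwp_out  = W @ query                        # control step: read out
attn_out = values.T @ (keys @ query)        # unnormalized linear attention

assert np.allclose(fwp_out, attn_out)
```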
2022-09-10 16:00:03 Healthcare revolution! On this day 10 years ago—when compute was 100 times more expensive—our DanNet was the first artificial neural net to win a medical imaging contest: the 2012 ICPR breast cancer detection contest. Today, this approach is heavily used. https://t.co/ow7OmFKxgv
2022-09-07 16:04:37 3/3: Our analysis draws inspiration from a wealth of research in neuroscience, cognitive psychology, and ML, and surveys relevant mechanisms, to identify a combination of inductive biases that may help symbolic information processing to emerge naturally in neural networks https://t.co/ItJo6hcK4R
2022-09-07 16:03:11 @vansteenkiste_s 2/3: We present a conceptual framework that connects these shortcomings of NNs to an inability to dynamically and flexibly bind information distributed throughout the network. We explore how this affects their capacity to acquire a compositional understanding of the world. https://t.co/uXFlwoRGoR
2022-09-07 15:53:25 1/3: “On the binding problem in artificial neural networks” with Klaus Greff and @vansteenkiste_s. An important paper from my lab that is of great relevance to the ongoing debate on symbolic reasoning and compositional generalization in neural networks: https://t.co/pOXGs89nrq https://t.co/vTOnyht5Hz
2022-08-10 16:04:47 Yesterday @nnaisense released EvoTorch (https://t.co/XAXLH9SDxn), a state-of-the-art evolutionary algorithm library built on @PyTorch, with GPU-acceleration and easy training on huge compute clusters using @raydistributed. (1/2)
2022-07-22 13:25:25 With Kazuki Irie and @robert_csordas at #ICML2022: any linear layer trained by gradient descent is a key-value/attention memory storing its entire training experience. This dual form helps us visualize how neural nets use training patterns at test time https://t.co/sViaXAlWU6 https://t.co/MmeCcgNPxx
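A small numerical check of that dual-form statement, under simplifying assumptions (plain SGD on a squared error; all names below are mine rather than the paper's): the trained linear layer's output on a test input equals the initial layer's output plus unnormalized attention over the stored training inputs, with scaled negative error signals as values.

```python
import numpy as np

# Dual-form check: after SGD, W = W0 - lr * sum_i outer(err_i, x_i), so the
# test-time output W x equals W0 x plus attention over the training inputs.
rng = np.random.default_rng(2)
d_in, d_out, n_steps, lr = 3, 2, 50, 0.1

W0 = rng.normal(size=(d_out, d_in))
W  = W0.copy()
stored_keys, stored_values = [], []          # the layer's "training experience"

for _ in range(n_steps):
    x = rng.normal(size=d_in)                # training input
    y = rng.normal(size=d_out)               # training target
    err = W @ x - y                          # gradient of 0.5*||Wx - y||^2 w.r.t. Wx
    W -= lr * np.outer(err, x)               # standard SGD step on the weights
    stored_keys.append(x)                    # key   = training input
    stored_values.append(-lr * err)          # value = scaled negative error signal

x_test = rng.normal(size=d_in)
primal = W @ x_test                          # ordinary forward pass
dual   = W0 @ x_test + np.array(stored_values).T @ (np.array(stored_keys) @ x_test)

assert np.allclose(primal, dual)
```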
2022-07-21 07:08:20 Our neural network learns to generate deep policies that achieve any desired return: a Fast Weight Programmer that overcomes limitations of Upside-Down Reinforcement Learning. Join @FaccioAI, @idivinci, A. Ramesh, @LouisKirschAI at @darl_icml on Friday https://t.co/exsj0hpHp4 https://t.co/aClHLFUdfJ
2022-07-19 07:10:15 @ylecun In 2011, our DanNet (named after my postdoc Dan Ciresan) was 2x better than humans, 3x better than the CNN of @ylecun’s team, and 6x better than the best non-neural method. LeCun’s CNN (based on Fukushima’s) had “no tail,” but let's not call it a dead end https://t.co/xcriF10Jz7
2022-07-19 07:00:58 I am the "French aviation buff” who touted French aviation pioneers 19 years ago in Nature &
2022-07-11 07:01:27 PS: in a 2016 @nytimes article “When A.I. Matures, It May Call Jürgen Schmidhuber ‘Dad’,” the same LeCun claims that “Jürgen … keeps claiming credit he doesn't deserve for many, many things,” without providing a single example. And now this :-)https://t.co/HO14crXg03 https://t.co/EIqa745mv0
2022-07-08 15:55:20 @ylecun I have now also officially logged my concerns on the OpenReview: https://t.co/3hpLImkebg
2022-07-07 07:01:42 Lecun (@ylecun)’s 2022 paper on Autonomous Machine Intelligence rehashes but doesn’t cite essential work of 1990-2015. We’ve already published his “main original contributions:” learning subgoals, predictable abstract representations, multiple time scales…https://t.co/Mm4mtHq5CY
2022-06-10 15:00:11 2022: 25th anniversary of "A Computer Scientist's View of Life, the Universe, and Everything” (1997). Is the universe a simulation, a metaverse? It may be much cheaper to compute ALL possible metaverses, not just ours. @morgan_freeman had a TV doc on it https://t.co/BA4wpONbBS https://t.co/7RfzvcF8sI
2022-06-09 07:29:46 2022: 25th anniversary. 1997 papers: Long Short-Term Memory. All computable metaverses. Hierarchical Reinforcement Learning (RL). Meta-RL. Abstractions in generative adversarial RL. Soccer learning. Low-complexity neural nets. Low-complexity art... https://t.co/1DEFX06d45
2022-05-20 09:19:22 @rasbt Here is a little overview site on this: https://t.co/yIjL4YoCqG
2022-05-20 09:17:27 RT @rasbt: Currently looking into the origins of training neural nets (CNNs in particular) on GPUs. Usually, AlexNet is my go-to example fo…
2022-11-22 08:02:15 LeCun's "5 best ideas 2012-22” are mostly from my lab, and older: 1 Self-supervised 1991 RNN stack
2022-12-07 19:00:23 At the #EMNLPmeeting 2022 with @robert_csordas &