What can large language models (LLMs) do for prompt, quality sleep? All humans sleep. There are several aspects of sleep for which services could be provided, at varying levels. If any LLM can provide a sleep service, it could become a major path to profitability and a key service to humanity, amid sleep deficits and adjacent conditions.
Sleep is the kind of problem that, if LLMs could solve it, would not just be a commercial win but a way to give back to society, against questions about the broad swath of their training data.
What are the problems around sleep? How might a chatbot, using text, video, image, or audio, shape how to fall asleep faster, stay asleep, or wake up refreshed? What are the opportunities in basic sleep science that LLMs could use as a foundation?
Sleep is rest. This does not mean the brain is at rest, so what is resting and what is not? When does this rest process set in? How does it stay at the right threshold to leave the feeling of rest afterward? What is the difference between dreaming and dreamless sleep? Why do some states of mind prevent sleep? How can levels of sleep be graded differently, beyond brain waves, to shape the measures of sleep?
There is a new paper in Science, A hippocampal circuit mechanism to balance memory reactivation during sleep, stating that:
"Memory consolidation involves the synchronous reactivation of hippocampal cells active during recent experience in sleep sharp-wave ripples (SWRs). How this increase in firing rates and synchrony after learning is counterbalanced to preserve network stability is not understood. We discovered a network event generated by an intrahippocampal circuit formed by a subset of CA2 pyramidal cells to cholecystokinin-expressing (CCK+) basket cells, which fire a barrage of action potentials (“BARR”) during non–rapid eye movement sleep. CA1 neurons and assemblies that increased their activity during learning were reactivated during SWRs but inhibited during BARRs. The initial increase in reactivation during SWRs returned to baseline through sleep. This trend was abolished by silencing CCK+ basket cells during BARRs, resulting in higher synchrony of CA1 assemblies and impaired memory consolidation."
Instead of exploring sleep with nerve cells alone, LLMs could base the sleep service on electrical and chemical signals. Brain waves are already tracks of electrical signals. The sleep service could add chemical signals to them, exploring the configurations and properties of those signals in sets, to present sleep in a different light.
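To make this concrete, here is a minimal sketch, assuming hypothetical inputs: EEG band powers standing in for electrical signals and coarse neurochemical labels standing in for chemical signals, combined into one configuration that an LLM could be asked to interpret. The SleepSnapshot structure, the example values, and the ask_llm call are illustrative assumptions, not an existing system.

```python
# A minimal sketch: pair electrical (EEG band power) and chemical descriptors
# per sleep stage, then build one prompt an LLM could reason over as a set.
from dataclasses import dataclass

@dataclass
class SleepSnapshot:
    stage: str            # e.g. "N2", "N3", "REM"
    band_power: dict      # EEG band -> relative power, e.g. {"delta": 0.7}
    chemical_state: dict  # label -> qualitative level, e.g. {"adenosine": "high"}

def build_sleep_prompt(snapshots: list[SleepSnapshot]) -> str:
    """Combine electrical and chemical descriptors into one prompt,
    so they are discussed as a configuration rather than in isolation."""
    lines = ["Describe what this sequence of sleep states suggests about rest quality:"]
    for s in snapshots:
        lines.append(f"- stage {s.stage}: bands {s.band_power}, chemistry {s.chemical_state}")
    return "\n".join(lines)

if __name__ == "__main__":
    night = [
        SleepSnapshot("N3", {"delta": 0.7, "theta": 0.2}, {"adenosine": "falling"}),
        SleepSnapshot("REM", {"theta": 0.5, "beta": 0.2}, {"acetylcholine": "high"}),
    ]
    print(build_sleep_prompt(night))
    # reply = ask_llm(build_sleep_prompt(night))  # hypothetical LLM call
```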
Every clinic in the world could take up this service, especially with the infusion of digital health, giving LLMs a key usage and income boost while aiding health outcomes.
II.
Machine translation may not do much for cognitive benefit. The ability to have the same voice speak in several languages may be useful for productivity but not for human intelligence. For many adults, learning a second language is hard, not because words and expressions cannot be picked up, but because retention is often slippery. How can LLMs be used to break the problem of retention, making the will to learn a new language, and the possibility of knowing it, almost effortless? News.
Pair every news article in a language already known with the language one desires to learn. This means presenting it line by line, paragraph by paragraph, and word for word, especially if the languages share similar characters and alphabets. If they don't, some knowledge of the other script might be necessary; if they do, it is possible to plunge in directly.
Why news? News is principally for information. It is not meant to be memorized, nor does it take much to be understood. It is so relatable that it is easy to recall some of the major stories. News does not require heavy lifting into memory, nor does it require absolute accuracy when a reader recalls it. This implies that there is no anxiety when dealing with news, so it is straightforward, most of the time, for most people with basic literacy and numeracy.
News makes it possible to build familiarity with a second language without having to get into the mood to study. Presenting news in two languages would make it possible to start picking up words, seeing patterns, and retaining those words after a while. It could provide a solid foundation in the second language, such that if the language is later picked up more seriously, a threshold of learning has already been surmounted. With voice mode, pronunciation can be mastered as well.
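As a minimal sketch of the dual-language idea, the snippet below interleaves each paragraph of an article with its translation. The translate function here is a placeholder for any LLM translation call, and the sample article text is illustrative only.

```python
# A minimal sketch: interleave a news article, paragraph by paragraph,
# with its translation into the language the reader wants to learn.
def translate(text: str, target_language: str) -> str:
    """Placeholder: in practice this would call an LLM or translation API."""
    return f"[{target_language}] {text}"

def dual_language_edition(article: str, target_language: str) -> str:
    """Pair each paragraph of the article with its translation,
    so the known and desired languages appear side by side."""
    paired = []
    for paragraph in filter(None, (p.strip() for p in article.split("\n\n"))):
        paired.append(paragraph)
        paired.append(translate(paragraph, target_language))
        paired.append("")  # blank line between pairs
    return "\n".join(paired)

if __name__ == "__main__":
    sample = "Markets rose today.\n\nThe central bank held rates steady."
    print(dual_language_edition(sample, "French"))
```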
Learning a second language can build cognitive strengths and set new personal targets, solving the problem of how to know and remember.
There are several AI and news publisher deals. Digital news is increasingly being accessed through subscriptions. Expanding offerings within those subscriptions, especially with a dual-language edition, would become a new source of revenue and be useful for progress.
There is a new Free Exchange column in The Economist, Artificial intelligence is losing hype, stating that, "Since peaking last month the share prices of Western firms driving the AI revolution have dropped by 15%. A growing number of observers now question the limitations of large language models, which power services such as ChatGPT. Big tech firms have spent tens of billions of dollars on AI models, with even more extravagant promises of future outlays. Yet according to the latest data from the Census Bureau, only 4.8% of American companies use AI to produce goods and services, down from a high of 5.4% early this year. Roughly the same share intend to do so within the next year."
III.
Most humans write with just one hand, right or left. One hand is dominant and the other is not, due to what is termed handedness, an aspect of laterality, so to speak. This means that one hand is more enabled than the other, which makes it write more precisely and with more balance.
This is why writing materials, pens and pencils, are designed so lightly: writing is coordinated by the hand, such that with formal training and memory of how to write, writing is often linear and legible. The same applies to styluses and digital pens, which can be used for just that purpose.
Writing with the non-dominant hand, with any pen, including a digital one, comes out awry and unsteady. This is not because what to write is not known, but because that hand lacks the enablement, so to speak.
How can this be changed? Suppose a pen has tactile feedback, vibrating slightly for balance against unsteadiness. Then, if the words, letters, figures, shapes, and expressions of a language are provided in the pen, it could lead the hand, predicting what is intended to be written, so that the outcome improves.
Simply, have an LLM-powered pen with knowledge of how different shapes, figures, and letters are written in different languages, plus tactile feedback, so that when an individual's non-dominant hand uses that pen, the hand can follow the pen and write clearly and steadily. What is to be written may also be stated in advance, making it easier to follow. After some months of this, the hand may be expected to write by itself, with any normal pen.
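A minimal sketch of the correction loop such a pen might run is below, assuming the pen can sample its tip position and drive a vibration motor. The stroke data, the deviation threshold, and the device interfaces are all hypothetical.

```python
# A minimal sketch: compare the pen tip's position with the intended stroke
# and map the deviation to a vibration intensity plus a direction hint.
import math

def nearest_point_on_stroke(pos, stroke):
    """Return the stroke point closest to the current pen position."""
    return min(stroke, key=lambda p: math.dist(pos, p))

def haptic_correction(pos, stroke, max_deviation=2.0):
    """Map deviation from the intended stroke to an intensity in [0, 1]
    and a unit vector pointing back toward the stroke."""
    target = nearest_point_on_stroke(pos, stroke)
    deviation = math.dist(pos, target)
    intensity = min(deviation / max_deviation, 1.0)
    if deviation == 0:
        return 0.0, (0.0, 0.0)
    direction = ((target[0] - pos[0]) / deviation, (target[1] - pos[1]) / deviation)
    return intensity, direction

if __name__ == "__main__":
    # Intended stroke for a straight vertical line, e.g. part of the letter "l".
    stroke = [(0.0, y / 10) for y in range(11)]
    pen_position = (0.8, 0.5)  # the hand has drifted off the line
    intensity, direction = haptic_correction(pen_position, stroke)
    print(f"vibrate at {intensity:.2f}, nudge toward {direction}")
```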
Why might this be commercially successful? It would have motor and cognitive benefits: adapting the muscles of the non-dominant hand to a new, specific task would count for both ability and exercise. Using the non-dominant hand can also be a way to learn by writing, where whatever one desires to learn gets written down, over and over, with the non-dominant hand.
This would be useful for older adults susceptible to certain conditions that affect motor and cognitive skills, helping to delay their onset or taper their depth. It may also have a gaming application for some: tactile feedback is already used in the controllers of gaming consoles. The pen could be used as a means to take on a writing target and get it done. It would also bring writing back into style as digital pervades everything.
The pen's training, from a base model, could be available on its internal chips. The novelty of the hardware would herald a new paradigm for human ability and could be a massive income magnet.
There is a recent feature on Vox.
IV.
If LLMs can provide the most detailed information on where to sell, it could become one of their biggest contributions to commerce, as well as to the economy. Business owners often seek marketplaces, and LLMs can be adapted to finding customers, with fine-tuning on specifications for locations, events, supply, pricing, payment, and so forth.
How? There are gas stations in most places around the world. Gas stations can become reference points for the current state of commerce in their areas. Staff or people around a gas station can provide detailed local information, which can then be used to prospect for demand and purchasing possibilities nearby. There are also health centers across these areas.
Some of these may also provide information. Aside from this on-the-ground information, there are already plenty of online sources. Coupling all of it with special dates, seasons, projections, pooling opportunities in the supply chain, and so forth, entrepreneurs could access information on where to sell, locally or globally.
This would aid both e-commerce and regular commerce across levels, benefiting sellers and buyers. The goal is to ensure that price clusters can be found, so that those who want something can find where to get it, along with product types and so forth, especially given how certain products are scarce in one place and abundant in another. LLM-powered marketplaces could deepen the usefulness of chatbots, beyond their use as agents.
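As a minimal sketch, assuming hypothetical demand reports gathered from such reference points and online sources, the snippet below ranks locations by a simple demand-and-scarcity score. The DemandReport fields, the scoring formula, and the data are illustrative, not a real market model.

```python
# A minimal sketch: combine local demand, local supply, and seasonal signals
# into an opportunity score, then rank locations for a given product.
from dataclasses import dataclass

@dataclass
class DemandReport:
    location: str
    product: str
    local_demand: float    # 0..1, from reference-point reports
    local_supply: float    # 0..1, how available the product already is
    seasonal_boost: float  # 0..1, e.g. holidays or harvest season

def opportunity_score(report: DemandReport) -> float:
    """Favor places where demand is high, supply is scarce, and season helps."""
    scarcity = 1.0 - report.local_supply
    return report.local_demand * scarcity * (1.0 + report.seasonal_boost)

def where_to_sell(reports: list[DemandReport], product: str, top_n: int = 3):
    candidates = [r for r in reports if r.product == product]
    return sorted(candidates, key=opportunity_score, reverse=True)[:top_n]

if __name__ == "__main__":
    reports = [
        DemandReport("Town A", "umbrellas", 0.8, 0.2, 0.5),
        DemandReport("Town B", "umbrellas", 0.6, 0.7, 0.1),
        DemandReport("Town C", "umbrellas", 0.9, 0.4, 0.3),
    ]
    for r in where_to_sell(reports, "umbrellas"):
        print(r.location, round(opportunity_score(r), 2))
```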
There is a recent feature on CIO, AI agents loom large as organizations pursue generative AI value, stating that, "At a time when organizations are seeking to generate value from GenAI, multiagents hold perhaps the most promise for boosting operational productivity. The value that agents can unlock stems from their potential to automate complex use cases characterized by highly variable inputs and outputs—use cases that have historically been hard to automate."
AI
AI safety and alignment with human values probably includes AI benefits and commercial viability. The more useful AI is to people, the more possible it might be to combat misuse technically or to disallow it.
There is a recent report by Goldman Sachs, Gen AI: Too Much Spend, Too Little Benefit?, stating that, "The promise of generative AI technology to transform companies, industries, and societies continues to be touted, leading tech giants, other companies, and utilities to spend an estimated ~$1tn on capex in coming years, including significant investments in data centers, chips, other AI infrastructure, and the power grid. But this spending has little to show for it so far beyond reports of efficiency gains among developers. The emergence of generative AI and related potential task automation could continue this pattern by boosting productivity growth, and, in turn, corporate profitability. But this will also depend on the distribution of the technology’s benefits between consumers, corporates, and governments."
There are several possibilities for AI to be commercially beneficial, as well as beneficial to the human mind.