We may have an adequate understanding of the human body in that, well, we invented aspirin and sequenced the genome, but researchers still find out new things about the humble Homo sapiens all of the time. Case in point? Scientists just discovered a previously unknown entity hanging out in the human gut and mouth. The researchers are calling these virus-like structures “obelisks”, due to their presumed microscopic shape.
These entities replicate like viruses, but are much smaller and simpler. Due to their minuscule size, they fall into the “viroid” class, which typically consists of single-stranded RNAs without a protein shell. However, most viroids are infectious agents that cause disease and it doesn’t look like that’s the case with these lil obelisks, as reported by Live Science.
So why are they inside of us and what do they do? That’s the big question. The discoverers at Stanford University, the University of Toronto and the Technical University of Valencia have some theories. They may influence gene activity within the human microbiome, though they also hang out in the mouth. To that end, they have been found using the common mouth-based bacterium Streptococcus sanguinis as a host. It’s been suggested that these viroids infect various bacteria in both the mouth and gut, though we don’t know why.
Some of the obelisks seem to contain instructions for enzymes required for replication, so they look to be more complex than your average viroid, as indicated by Science. In any event, there has been a “chicken and the egg” debate raging for years over whether viruses evolved from viroids or if viroids actually evolved from viruses, so further study could finally end that argument.
While we don’t exactly know what these obelisk sequences do, scientists have discovered just how prevalent they are in our bodies. These sequences are found in roughly seven percent of human gut bacteria and a whopping 50 percent of mouth bacteria. The gut-based structures also feature a distinctive RNA sequence when compared to the mouth-based obelisks. This diversity has led researchers to proclaim that they “comprise a class of diverse RNAs that have colonized, and gone unnoticed in, human, and global microbiomes.”
“I think this is one more clear indication that we are still exploring the frontiers of this viral universe,” computational biologist Simon Roux of the DOE Joint Genome Institute at Lawrence Berkeley National Laboratory told Science.
“It’s insane,” added Mark Peifer, a cell and developmental biologist at the University of North Carolina at Chapel Hill. “The more we look, the more crazy things we see.”
Speaking of frontier medicine, scientists also recently created custom bacteria to detect cancer cells and biometric implants that detect organ rejection after replacement surgery. The human body may be just about as vast and mysterious as the ocean, or even space, but we’re slowly (ever so slowly) unraveling its puzzles.
This article originally appeared on Engadget at https://www.engadget.com/scientists-discover-weird-virus-like-obelisks-in-the-human-gut-and-mouth-162644669.html?src=rss
If there’s one thing we can all agree upon, it’s that the 21st century’s captains of industry are trying to shoehorn AI into every corner of our world. But for all of the ways in which AI will be shoved into our faces and not prove very successful, it might actually have at least one useful purpose. For instance, by dramatically speeding up the often decades-long process of designing, finding and testing new drugs.
Risk mitigation isn’t a sexy notion, but it’s worth understanding how common it is for a new drug project to fail. To set the scene, consider that each drug project takes between three and five years to form a hypothesis strong enough to start tests in a laboratory. A 2022 study from Professor Duxin Sun found that 90 percent of clinical drug development fails, with each project costing more than $2 billion. And that number doesn’t even include compounds found to be unworkable at the preclinical stage. Put simply, every successful drug has to prop up at least $18 billion in waste generated by its unsuccessful siblings, which all but guarantees that less lucrative cures for rarer conditions aren’t given as much focus as they may need.
Dr. Nicola Richmond is VP of AI at Benevolent, a biotech company using AI in its drug discovery process. She explained that the classical system tasks researchers with finding, for example, a misbehaving protein – the cause of disease – and then finding a molecule that could make it behave. Once they've found one, they need to get that molecule into a form a patient can take, and then test if it’s both safe and effective. The journey to clinical trials on a living human patient takes years, and it’s often only then that researchers find out that what worked in theory does not work in practice.
The current process takes “more than a decade and multiple billions of dollars of research investment for every drug approved,” said Dr. Chris Gibson, co-founder of Recursion, another company in the AI drug discovery space. He says AI’s great skill may be to dodge the misses and help avoid researchers spending too long running down blind alleys. A software platform that can churn through hundreds of options at a time can, in Gibson’s words, “fail faster and earlier so you can move on to other targets.”
Dr. Anne E. Carpenter is the founder of the Carpenter-Singh laboratory at the Broad Institute of MIT and Harvard. She has spent more than a decade developing techniques in Cell Painting, a way to highlight elements in cells, with dyes, to make them readable by a computer. She is also the co-developer of Cell Profiler, a platform enabling researchers to use AI to scrub through vast troves of images of those dyed cells. Combined, this work makes it easy for a machine to see how cells change when they are impacted by the presence of disease or a treatment. And by looking at every part of the cell holistically – a discipline known as “omics” – there are greater opportunities for making the sort of connections that AI systems excel at.
Using pictures as a way of identifying potential cures seems a little left-field, since how things look doesn’t always represent how things actually are, right? Carpenter said humans have always made subconscious assumptions about medical status from sight alone. She explained that most people can recognize that someone may have a chromosomal condition just by looking at their face. And professional clinicians can identify a number of disorders by sight alone purely as a consequence of their experience. She added that if you took a picture of everyone’s face in a given population, a computer would be able to identify patterns and sort them based on common features.
This logic applies to the pictures of cells, where it’s possible for a digital pathologist to compare images from healthy and diseased samples. If a human can do it, then it should be faster and easier to employ a computer to spot these differences at scale, so long as it’s accurate. “You allow this data to self-assemble into groups and now [you’re] starting to see patterns,” she explained, “when we treat [cells] with 100,000 different compounds, one by one, we can say ‘here’s two chemicals that look really similar to each other.’” And compounds that look similar aren’t doing so by coincidence; the resemblance seems to be indicative of how they actually behave.
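The grouping step Carpenter describes can be sketched in a few lines. This is purely an illustrative toy, not Cell Profiler's actual pipeline: each compound is reduced to a hypothetical "morphological profile" vector of image-derived features, and the pair of compounds with the most similar profiles is surfaced.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors (1.0 = identical direction).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical per-compound "morphological profiles": each number stands in
# for an image-derived feature (cell size, texture, stain intensity, ...).
profiles = {
    "compound_A": np.array([0.90, 0.10, 0.40]),
    "compound_B": np.array([0.88, 0.12, 0.41]),  # nearly identical to A
    "compound_C": np.array([0.10, 0.95, 0.20]),  # very different
}

# Find the pair of compounds whose profiles look most alike.
names = list(profiles)
best = max(
    ((a, b) for i, a in enumerate(names) for b in names[i + 1:]),
    key=lambda pair: cosine_similarity(profiles[pair[0]], profiles[pair[1]]),
)
print(best)  # → ('compound_A', 'compound_B')
```

In a real screen the vectors have hundreds of features and the pairwise comparison becomes a clustering problem, but the core idea is the same: compounds whose cell images "look similar" end up close together in feature space.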
In one example, Carpenter cited that two different compounds could produce similar effects in a cell, and by extension could be used to treat the same condition. If so, then it may be that one of the two – which may not have been intended for this purpose – has fewer harmful side effects. Then there’s the potential benefit of being able to identify something that we didn’t know was affected by disease. “It allows us to say, ‘hey, there’s this cluster of six genes, five of which are really well known to be part of this pathway, but the sixth one, we didn’t know what it did, but now we have a strong clue it’s involved in the same biological process.” “Maybe those other five genes, for whatever reason, aren’t great direct targets themselves, maybe the chemicals don’t bind,” she said, “but the sixth one [could be] really great for that.”
In this context, the startups using AI in their drug discovery processes are hoping that they can find the diamonds hiding in plain sight. Dr. Richmond said that Benevolent’s approach is for the team to pick a disease of interest and then formulate a biological question around it. So, at the start of one project, the team might wonder if there are ways to treat ALS by enhancing, or fixing, the way a cell’s own housekeeping system works. (To be clear, this is a purely hypothetical example supplied by Dr. Richmond.)
That question is then run through Benevolent’s AI models, which pull together data from a wide variety of sources. They then produce a ranked list of potential answers to the question, which can include novel compounds, or existing drugs that could be adapted to suit. The data then goes to a researcher, who can examine what, if any, weight to give to its findings. Dr. Richmond added that the model has to provide evidence from existing literature or sources to support its findings even if its picks are out of left-field. And that, at all times, a human has the final say on what of its results should be pursued and how vigorously.
It’s a similar situation at Recursion, with Dr. Gibson claiming that its model is now capable of predicting “how any drug will interact with any disease without having to physically test it.” The model has now formed around three trillion predictions connecting potential problems to their potential solutions based on the data it has already absorbed and simulated. Gibson said that the process at the company now resembles a web search: Researchers sit down at a terminal, “type in a gene associated with breast cancer and [the system] populates all the other genes and compounds that [it believes are] related.”
“What gets exciting,” said Dr. Gibson, “is when [we] see a gene nobody has ever heard of in the list, which feels like novel biology because the world has no idea it exists.” Once a target has been identified and the findings checked by a human, the data will be passed to Recursion’s in-house scientific laboratory. Here, researchers will run initial experiments to see if what was found in the simulation can be replicated in the real world. Dr. Gibson said that Recursion’s wet lab, which uses large-scale automation, is capable of running more than two million experiments in a working week.
“About six weeks later, with very little human intervention, we’ll get the results,” said Dr. Gibson and, if successful, it’s then the team will “really start investing.” Because, until this point, the short period of validation work has cost the company “very little money and time to get.” The promise is that, rather than a three-year preclinical phase, that whole process can be crunched down to a few database searches, some oversight and then a few weeks of ex vivo testing to confirm if the system’s hunches are worth making a real effort to interrogate. Dr. Gibson said the company believes it has taken a “year’s worth of animal model work and [compressed] it, in many cases, to two months.”
Of course, there is not yet a concrete success story, no wonder cure that any company in this space can point to as a validation of the approach. But Recursion can cite one real-world example of how close its platform came to matching the success of a critical study. In April 2020, Recursion ran the COVID-19 sequence through its system to look at potential treatments. It examined both FDA-approved drugs and candidates in late-stage clinical trials. The system produced a list of nine potential candidates which would need further analysis, eight of which would later prove to be correct. It also predicted that Hydroxychloroquine and Ivermectin, both much-ballyhooed in the earliest days of the pandemic, would flop.
And there are AI-informed drugs that are currently undergoing real-world clinical trials right now. Recursion is pointing to five projects currently finishing their stage one (tests in healthy patients), or entering stage two (trials in people with the rare diseases in question) clinical testing right now. Benevolent has started a stage one trial of BEN-8744, a treatment for ulcerative colitis that may help with other inflammatory bowel disorders. And BEN-8744 is targeting an inhibitor that has no prior associations in the existing research which, if successful, will add weight to the idea that AIs can spot the connections humans have missed. Of course, we can’t make any conclusions until at least early next year when the results of those initial tests will be released.
There are plenty of unanswered questions, including how much we should rely upon AI as the sole arbiter of the drug discovery pipeline. There are also questions around the quality of the training data and the biases in the wider sources more generally. Dr. Richmond highlighted the issues around biases in genetic data sources both in terms of the homogeneity of cell cultures and how those tests are carried out. Similarly, Dr. Carpenter said the results of her most recent project, the publicly available JUMP-Cell Painting project, were based on cells from a single participant. “We picked it with good reason, but it’s still one human and one cell type from that one human.” In an ideal world, she’d have a far broader range of participants and cell types, but the issues right now center on funding and time, or more appropriately, their absence.
But, for now, all we can do is await the results of these early trials and hope that they bear fruit. Like every other potential application of AI, its value will rest largely in its ability to improve the quality of the work – or, more likely, improve the bottom line for the business in question. If AI can make the savings attractive enough, however, then maybe those diseases which are not likely to make back the investment demands under the current system may stand a chance. It could all collapse in a puff of hype, or it may offer real hope to families struggling for help while dealing with a rare disorder.
This article originally appeared on Engadget at https://www.engadget.com/ai-is-coming-for-big-pharma-150045224.html?src=rss
Believe it or not, scientists have been using virtual reality setups to study brain activity in lab mice for years. In the past, this has been done by surrounding the mice with flat displays — a tactic that has obvious limitations for simulating a realistic environment. Now, in an attempt to create a more immersive experience, a team at Northwestern University actually developed tiny VR goggles that fit over a mouse’s face… and most of its body. This has allowed them to simulate overhead threats for the first time, and map the mice’s brain activity all the while.
The system, dubbed Miniature Rodent Stereo Illumination VR (or iMRSIV), isn’t strapped onto the mouse’s head like a VR headset for humans. Instead, the goggles are positioned at the front of a treadmill, surrounding the mouse’s entire field of view as it runs in place. “We designed and built a custom holder for the goggles,” said John Issa, the study’s co-first author. “The whole optical display — the screens and the lenses — go all the way around the mouse.”
In their tests, the researchers say the mice appeared to take to the new VR environment more quickly than they did with the past setups. To recreate the presence of overhead threats, like birds swooping in for a meal, the team projected expanding dark spots at the tops of the displays. The way they react to threats like this “is not a learned behavior; it’s an imprinted behavior,” said co-first author Dom Pinke. “It’s wired inside the mouse’s brain.”
With this method, the researchers were able to record both the mice’s outward physical responses, like freezing in place or speeding up, and their neural activity. In the future, they may flip the scenario and let the mice act as predators, to see what goes on as they hunt insects. A paper on the technique was published in the journal Neuron on Friday.
This article originally appeared on Engadget at https://www.engadget.com/researchers-made-vr-goggles-for-mice-to-study-how-their-brains-respond-to-swooping-predators-215927095.html?src=rss
An international team of scientists has developed a new technology that can help detect (or even treat) cancer in hard-to-reach places, such as the colon. The team has published a paper in Science for the technique dubbed CATCH, or cellular assay for targeted, CRISPR-discriminated horizontal gene transfer. For their lab experiments, the scientists used a species of bacterium called Acinetobacter baylyi. This bacterium has the ability to naturally take up free-floating DNA from its surroundings and then integrate it into its own genome, allowing it to produce new protein for growth.
What the scientists did was engineer A. baylyi bacteria so that they'd contain long sequences of DNA mirroring the DNA found in human cancer cells. These sequences serve as one half of a zipper that locks on to captured cancer DNA. For their tests, the scientists focused on the mutated KRAS gene that's commonly found in colorectal tumors. If an A. baylyi bacterium finds a mutated DNA and integrates it into its genome, a linked antibiotic resistance gene also gets activated. That's what the team used to confirm the presence of cancer cells: After all, only bacteria with active antibiotic resistance could grow on culture plates filled with antibiotics.
While the scientists were successfully able to detect tumor DNA in mice injected with colorectal cancer cells in the lab, the technology is still not ready to be used for actual diagnosis. The team said it's still working on the next steps, including improving the technique's efficiency and evaluating how it performs compared to other diagnostic tests. "The most exciting aspect of cellular healthcare, however, is not in the mere detection of disease. A laboratory can do that," Dan Worthley, one of the study's authors, wrote in The Conversation. In the future, the technology could also be used for targeted biological therapy that can deploy treatment to specific parts of the body based on the presence of certain DNA sequences.
This article originally appeared on Engadget at https://www.engadget.com/scientists-genetically-engineer-bacteria-to-detect-cancer-cells-114511365.html?src=rss
Even today, popular Western culture toys with the idea of talking animals, though often through a lens of technology-empowered speech rather than supernatural force. The dolphins from both Seaquest DSV and Johnny Mnemonic communicated with their bipedal contemporaries through advanced translation devices, as did Dug the dog from Up.
We’ve already got machine-learning systems and natural language processors that can translate human speech into any number of existing languages, and adapting that process to convert animal calls into human-interpretable signals doesn’t seem that big of a stretch. However, it turns out we’ve got more work to do before we can converse with nature.
What is language?
“All living things communicate,” an interdisciplinary team of researchers argued in 2018’s On understanding the nature and evolution of social cognition: a need for the study of communication. “Communication involves an action or characteristic of one individual that influences the behavior, behavioral tendency or physiology of at least one other individual in a fashion typically adaptive to both.”
From microbes, fungi and plants on up the evolutionary ladder, science has yet to find an organism that exists in such extreme isolation as to not have a natural means of communicating with the world around it. But we should be clear that “communication” and “language” are two very different things.
“No other natural communication system is like human language,” argues the Linguistics Society of America. Language allows us to express our inner thoughts and convey information, as well as request or even demand it. “Unlike any other animal communication system, it contains an expression for negation — what is not the case … Animal communication systems, in contrast, typically have at most a few dozen distinct calls, and they are used only to communicate immediate issues such as food, danger, threat, or reconciliation.”
That’s not to say that pets don’t understand us. “We know that dogs and cats can respond accurately to a wide range of human words when they have prior experience with those words and relevant outcomes,” Dr. Monique Udell, Director of the Human-Animal Interaction Laboratory at Oregon State University, told Engadget. “In many cases these associations are learned through basic conditioning,” Dr. Udell said — like when we yell “dinner” just before setting out bowls of food.
Whether or not our dogs and cats actually understand what “dinner” means outside of the immediate Pavlovian response remains to be seen. “We know that at least some dogs have been able to learn to respond to over 1,000 human words (labels for objects) with high levels of accuracy,” Dr. Udell said. “Dogs currently hold the record among non-human animal species for being able to match spoken human words to objects or actions reliably,” but it’s “difficult to know for sure to what extent dogs understand the intent behind our words or actions.”
Dr. Udell continued: “This is because when we measure a dog or cat’s understanding of a stimulus, like a word, we typically do so based on their behavior.” You can teach a dog to sit with both English and German commands, but “if a dog responds the same way to the word ‘sit’ in English and in German, it is likely the simplest explanation — with the fewest assumptions — is that they have learned that when they sit in the presence of either word then there is a pleasant consequence.”
Hush, the computers are speaking
Natural Language Processing (NLP) is the branch of AI that enables computers and algorithmic models to interpret text and speech, including the speaker’s intent, the same way we meatsacks do. It combines computational linguistics, which models the syntax, grammar and structure of a language, and machine-learning models, which “automatically extract, classify, and label elements of text and voice data and then assign a statistical likelihood to each possible meaning of those elements,” according to IBM. NLP underpins the functionality of every digital assistant on the market. Basically any time you’re speaking at a “smart” device, NLP is translating your words into machine-understandable signals and vice versa.
The field of NLP research has undergone a significant evolution in recent years, as its core systems have migrated from older Recurrent and Convolutional Neural Networks towards Google’s Transformer architecture, which greatly increases training efficiency.
Dr. Noah D. Goodman, Associate Professor of Psychology and Computer Science, and Linguistics at Stanford University, told Engadget that, with RNNs, “you'll have to go time-step by time-step or like word by word through the data and then do the same thing backward.” In contrast, with a transformer, “you basically take the whole string of words and push them through the network at the same time.”
“It really matters to make that training more efficient,” Dr. Goodman continued. “Transformers, they're cool … but by far the biggest thing is that they make it possible to train efficiently and therefore train much bigger models on much more data.”
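The difference Dr. Goodman describes can be made concrete with a toy sketch (illustrative only, using made-up dimensions and random weights rather than a real model): the RNN must loop through the sequence one position at a time because each hidden state depends on the previous one, while self-attention processes every position in a single batched matrix operation.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 5, 4                   # 5 "words", 4-dimensional embeddings
x = rng.normal(size=(seq_len, d))   # toy token embeddings
W_h = rng.normal(size=(d, d)) * 0.1
W_x = rng.normal(size=(d, d)) * 0.1

# RNN: step t depends on step t-1, so the loop cannot be parallelized
# across the sequence; training must go "time-step by time-step".
h = np.zeros(d)
for t in range(seq_len):
    h = np.tanh(h @ W_h + x[t] @ W_x)   # inherently sequential

# Transformer-style self-attention: every token attends to every other
# token via one matrix product, so the whole string of words is pushed
# through the network at the same time.
scores = x @ x.T / np.sqrt(d)                               # (5, 5)
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
attended = weights @ x                                      # all positions at once
print(attended.shape)  # → (5, 4)
```

That one structural change, replacing the sequential loop with a batched matrix operation, is what lets transformers train efficiently on far bigger models and datasets.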
Dr. Jeffrey Lucas, professor in the Biological Sciences department at Purdue University, told Engadget that the call system of the Paridae, the songbird family that includes chickadees and titmice, “is one of the most complicated vocal systems that we know of. At the end of the day, what the [field’s voluminous number of research] papers are showing is that it's god-awfully complicated, and the problem with the papers is that they grossly under-interpret how complicated [the calls] actually are.”
These parids often live in socially complex, heterospecific flocks, mixed groupings that include multiple songbird and woodpecker species. The complexity of the birds’ social system is correlated with an increased diversity in communications systems, Dr. Lucas said. “Part of the reason why that correlation exists is because, if you have a complex social system that's multi-dimensional, then you have to convey a variety of different kinds of information across different contexts. In the bird world, they have to defend their territory, talk about food, integrate into the social system [and resolve] mating issues.”
The chickadee call consists of at least six distinct notes set in an open-ended vocal structure, which is both monumentally rare in non-human communication systems and the reason for the chickadee’s call complexity. An open-ended vocal system means that “increased recording of chick-a-dee calls will continually reveal calls with distinct note-type compositions,” explained the 2012 study, Linking social complexity and vocal complexity: a parid perspective. “This open-ended nature is one of the main features the chick-a-dee call shares with human language, and one of the main differences between the chick-a-dee call and the finite song repertoires of most songbird species.”
Dolphins have no need for kings
Training language models isn’t simply a matter of shoving in large amounts of data. When training a model to translate an unknown language into what you’re speaking, you need to have at least a rudimentary understanding of how the two languages correlate with one another so that the translated text retains the proper intent of the speaker.
“The strongest kind of data that we could have is what's called a parallel corpus,” Dr. Goodman explained, which is basically having a Rosetta Stone for the two tongues. In that case, you’d simply have to map between specific words, symbols and phonemes in each language — figure out what means “river” or “one bushel of wheat” in each and build out from there.
Without that perfect translation artifact, so long as you have large corpuses of data for both languages, “it's still possible to learn a translation between the languages, but it hinges pretty crucially on the idea that the kind of latent conceptual structure,” Dr. Goodman continued, which assumes that both culture’s definitions of “one bushel of wheat” are generally equivalent.
Goodman points to the word pairs ‘man and woman’ and ‘king and queen’ in English. “The structure, or geometry, of that relationship [is what] we expect [to carry over]: if we were translating into Hungarian, we would also expect those four concepts to stand in a similar relationship,” Dr. Goodman said. “Then effectively the way we'll learn a translation now is by learning to translate in a way that preserves the structure of that conceptual space as much as possible.”
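The geometry Goodman is describing is the classic word-embedding analogy: in a well-trained embedding space, the offset from “man” to “woman” roughly matches the offset from “king” to “queen.” A toy sketch with hand-picked 2-D vectors (real embeddings are learned and high-dimensional, but the geometric idea is identical):

```python
import numpy as np

# Hand-chosen toy embeddings: the second coordinate encodes the "gender
# offset" consistently across both word pairs. Real embeddings are learned
# from data, not set by hand like this.
emb = {
    "man":   np.array([1.0, 1.0]),
    "woman": np.array([1.0, 3.0]),
    "king":  np.array([5.0, 1.0]),
    "queen": np.array([5.0, 3.0]),
}

# "king - man + woman" should land near "queen" if the man→woman
# relationship has the same geometry as king→queen.
target = emb["king"] - emb["man"] + emb["woman"]

closest = min(
    (w for w in emb if w != "king"),          # exclude the query word itself
    key=lambda w: np.linalg.norm(emb[w] - target),
)
print(closest)  # → queen
```

Structure-preserving translation generalizes this: rather than matching individual words, the model learns a mapping between two embedding spaces that keeps these relative offsets intact.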
Having a large corpus of data to work with in this situation also enables unsupervised learning techniques to be used to “extract the latent conceptual space,” Dr. Goodman said, though that method is more resource intensive and less efficient. However, if all you have is a large corpus in only one of the languages, you’re generally out of luck.
“For most human languages we assume the [quartet concepts] are kind of, sort of similar, like, maybe they don't have ‘king and queen’ but they definitely have ‘man and woman,’” Dr. Goodman continued. ”But I think for animal communication, we can't assume that dolphins have a concept of ‘king and queen’ or whether they have ‘men and women.’ I don't know, maybe, maybe not.”
And without even that rudimentary conceptual alignment to work from, discerning the context and intent of an animal’s call — much less deciphering the syntax, grammar and semantics of the underlying communication system — becomes much more difficult. “You're in a much weaker position,” Dr. Goodman said. “If you have the utterances in the world context that they're uttered in, then you might be able to get somewhere.”
Basically, if you can obtain multimodal data that provides context for the recorded animal call — the environmental conditions, time of day or year, the presence of prey or predator species, etc — you can “ground” the language data into the physical environment. From there you can “assume that English grounds into the physical environment in the same way as this weird new language grounds into the physical environment” and use that as a kind of bridge between the languages.
Unfortunately, the challenge of translating bird calls into English (or any other human language) is going to fall squarely into the fourth category. This means we’ll need more data and a lot of different types of data as we continue to build our basic understanding of the structures of these calls from the ground up. Some of those efforts are already underway.
The Dolphin Communication Project, for example, employs a combination “mobile video/acoustic system” to capture both the utterances of wild dolphins and their relative position in physical space at that time to give researchers added context to the calls. Biologging tags — animal-borne sensors affixed to hide, hair, or horn that track the locations and conditions of their hosts — continue to shrink in size while growing in both capacity and capability, which should help researchers gather even more data about these communities.
What if birds are just constantly screaming about the heat?
Even if we won’t be able to immediately chat with our furred and feathered neighbors, gaining a better understanding of at least how they talk to each other could prove valuable to conservation efforts. Dr. Lucas points to a recent study he participated in that found environmental changes induced by climate change can radically change how different bird species interact in mixed flocks. “What we showed was that if you look across the disturbance gradients, then everything changes,” Dr. Lucas said. “What they do with space changes, how they interact with other birds changes. Their vocal systems change.”
“The social interactions for birds in winter are extraordinarily important because you know, 10 gram bird — if it doesn't eat in a day, it's dead,” Dr. Lucas continued. “So information about their environment is extraordinarily important. And what those mixed species flocks do is to provide some of that information.”
However, that network quickly breaks down as the habitat degrades, and in order to survive “they have to really go through fairly extreme changes in behavior and social systems and vocal systems … but that impacts fertility rates, and their ability to feed their kids and that sort of thing.”
Better understanding their calls will help us better understand their levels of stress, which can serve both modern conservation efforts and agricultural ends. “The idea is that we can get an idea about the level of stress in [farm animals], then use that as an index of what's happening in the barn and whether we can maybe even mitigate that using vocalizations,” Dr. Lucas said. “AI probably is going to help us do this.”
“Scientific sources indicate that noise in farm animal environments is a detrimental factor to animal health,” Jan Brouček of the Research Institute for Animal Production Nitra, observed in 2014. “Especially longer lasting sounds can affect the health of animals. Noise directly affects reproductive physiology or energy consumption.” That continuous drone is thought to also indirectly impact other behaviors including habitat use, courtship, mating, reproduction and the care of offspring.
This article originally appeared on Engadget at https://www.engadget.com/why-humans-cant-use-natural-language-processing-to-speak-with-the-animals-143050169.html?src=rss
Researchers at Stanford Medicine have made a promising discovery that could lead to new cancer treatments in the future. Scientists conducted tests in which they altered the genomes of skin-based microbes and bacteria to fight cancer. These altered microbes were swabbed onto cancer-stricken mice and, lo and behold, tumors began to dissipate.
The bacterium in question, Staphylococcus epidermidis, was taken from the fur of mice and altered to produce a protein that stimulates the immune system against specific tumors. The experiment seemed to be a resounding success, with the modified bacteria killing aggressive types of metastatic skin cancer after being gently applied to the fur. The results were also achieved without any noticeable inflammation.
“It seemed almost like magic,” said Michael Fischbach, PhD, an associate professor of bioengineering at Stanford. “These mice had very aggressive tumors growing on their flank, and we gave them a gentle treatment where we simply took a swab of bacteria and rubbed it on the fur of their heads.”
This is yet another foray into the misunderstood world of microbiomes and all of the bacteria that reside there. Gut biomes get all of the press these days, but the skin also plays host to millions upon millions of bacteria, fungi and viruses, and the purpose of these entities is often unknown.
In this instance, scientists found that S. epidermidis cells trigger the production of immune cells called CD8 T cells. The researchers essentially hijacked S. epidermidis into displaying antigens that prime those CD8 T cells against a specific target, in this case antigens associated with skin cancer tumors. When the T cells encountered a matching tumor, they began to rapidly reproduce and shrink the mass, or extinguish it entirely.
“Watching those tumors disappear — especially at a site distant from where we applied the bacteria — was shocking,” Fischbach said. “It took us a while to believe it was happening.”
As with all burgeoning cancer treatments, there are some heavy caveats. First of all, these experiments are being conducted on mice. Humans and mice are biologically similar in many respects, but a great many treatments that work on mice are a dud with people. Stanford researchers have no idea if S. epidermidis triggers an immune response in humans, though our skin is littered with the stuff, so they may need to find a different microbe to alter. Also, this treatment is designed to treat skin cancer tumors and is applied topically. It remains to be seen if the benefits carry over to internal cancers.
With that said, the Stanford team says they expect human trials to start within the next few years, though more testing is needed on both mice and other animals before going ahead with people. Scientists hope that this treatment could eventually be pointed at all kinds of infectious diseases, in addition to cancer cells.
This article originally appeared on Engadget at https://www.engadget.com/scientists-have-successfully-engineered-bacteria-to-fight-cancer-in-mice-165141857.html?src=rss
Researchers understand the structure of brains and have mapped them out in some detail, but they still don't know exactly how they process data — for that, a detailed "circuit map" of the brain is needed.
Now, scientists have created just such a map for the most advanced creature yet: a fruit fly larva. Called a connectome, it diagrams the insect's 3,016 neurons and 548,000 synapses, Neuroscience News has reported. The map will help researchers better understand how the brains of both insects and other animals control behavior, learning, body functions and more. The work may even inspire improved AI networks.
"Up until this point, we’ve not seen the structure of any brain except of the roundworm C. elegans, the tadpole of a low chordate, and the larva of a marine annelid, all of which have several hundred neurons," said professor Marta Zlatic from the MRC Laboratory of Molecular Biology. "This means neuroscience has been mostly operating without circuit maps. Without knowing the structure of a brain, we’re guessing on the way computations are implemented. But now, we can start gaining a mechanistic understanding of how the brain works."
To build the map, the team scanned thousands of slices from the larva's brain with an electron microscope, then integrated those into a detailed map, annotating all the neural connections. From there, they used computational tools to identify likely information flow pathways and types of "circuit motifs" in the insect's brain. They even noticed that some structural features closely resembled state-of-the-art deep learning architecture.
Scientists have made detailed maps of the brain of an adult fruit fly, which is far more complex than that of a larva. However, those maps don't include all the detailed connections required to form a true circuit map of the brain.
As a next step, the team will investigate the structures used for behavioral functions like learning and decision-making, and examine connectome activity while the insect performs specific activities. And while a fruit fly larva is a simple insect, the researchers expect to see similar patterns in other animals. "In the same way that genes are conserved across the animal kingdom, I think that the basic circuit motifs that implement these fundamental behaviours will also be conserved," said Zlatic.
This article originally appeared on Engadget at https://www.engadget.com/scientists-create-the-most-complex-map-yet-of-an-insect-brains-wiring-085600210.html?src=rss