Steam now allows you to copy games to Steam Deck and other PCs over a local network

Valve is giving Steam Deck users with slow internet connections or bandwidth caps a new way to install games on their devices. The latest Steam and Steam Deck betas add local network game transfers, a feature that allows you to copy existing files from one PC to another over a local area network. Valve says the tool can reduce internet traffic and lessen the time it takes to install games and updates since you can use it to bypass the need to connect to a Steam content server over the internet.

“Local Network Game Transfers are great for Steam Deck owners, multi-user Steam households, dorms, LAN parties, etc,” the company points out. “No more worries about bandwidth or data caps when all the files you need are already nearby.” Once you’ve installed the new software on your devices, Steam will first check if it can transfer a game installation or set of update files over your local network before contacting a public Steam content server. If at any point one of the devices involved in the transfer is disconnected from your local network, Steam will fall back to downloading any necessary files from the internet.
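The fallback behavior Valve describes (try the local network first, then drop back to a Steam content server) can be sketched roughly like this. All function, host, and parameter names here are hypothetical illustrations, not Steam's actual API:

```python
def fetch_game_files(app_id, lan_peers, cdn_url):
    # lan_peers: hypothetical list of hosts on the local network
    # that report having the files for this app.
    for host in lan_peers:
        try:
            return f"app {app_id}: LAN transfer from {host}"
        except ConnectionError:
            # Peer dropped off the network mid-transfer;
            # try the next one.
            continue
    # No usable peer: fall back to a public Steam content server.
    return f"app {app_id}: internet download from {cdn_url}"
```

The key property is the ordering: the public content server is only contacted once every nearby candidate has been exhausted or disconnected.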

By default, the feature is set to only work between devices logged into the same Steam account, but you can also transfer files between friends on the same local area network. It’s also possible to transfer to any user on the same network, which is something you would do during a LAN tournament. Valve has published a FAQ with more information about local network game transfers, including details on some of the limitations of the feature, over on the Steam website.
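The three sharing modes amount to a simple permission check. A minimal sketch, with invented mode names standing in for whatever Valve actually calls these settings:

```python
def may_transfer(mode, requester, owner, friends):
    """Decide whether a LAN transfer is allowed under each mode.

    mode: "own_devices" (the default), "friends" or "anyone",
    mirroring the three options described above (names hypothetical).
    """
    if mode == "anyone":
        return True  # e.g. a LAN tournament
    if mode == "friends":
        return requester == owner or requester in friends
    return requester == owner  # default: same Steam account only
```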

How AI will change the way we search, for better or worse

Great news everyone, we’re pivoting to chatbots! Little did OpenAI realize when it released ChatGPT last November that the advanced LLM (large language model) designed to uncannily mimic human writing would become the fastest-growing app to date, with more than 100 million users signing up over the past three months. Its success, helped along by a $10 billion, multi-year investment from Microsoft, largely caught the company’s competition flat-footed, in turn spurring a frantic response from Google, Baidu and Alibaba. But as these enhanced search engines come online in the coming days, the ways and whys of how we search are sure to evolve alongside them.

“I'm pretty excited about the technology. You know, we've been building NLP systems for a while and we've been looking every year at incremental growth,” Dr. Sameer Singh, Associate Professor of Computer Science at the University of California, Irvine (UCI), told Engadget. “For the public, it seems like suddenly out of the blue, that's where we are. I've seen things getting better over the years and it's good for all of this stuff to be available everywhere and for people to be using it.”

As to the recent public success of large language models, “I think it's partly that technology has gotten to a place where it's not completely embarrassing to put the output of these models in front of people — and it does look really good most of the time,” Singh continued. “I think that that’s good enough.”

“I think it has less to do with technology but more to do with the public perception,” he continued. “If GPT hadn't been released publicly… Once something like that is out there and it's really resonating with so many people, the usage is off the charts.”

Search providers have big, big ideas for how artificial intelligence-enhanced web crawlers and search engines might work, and damned if they aren’t going to move fast and break stuff to get there. Microsoft envisions its Bing AI serving as the user’s “copilot” for web browsing, following them from page to page, answering questions and even writing social media posts on their behalf.

This is a fundamental change from the process we use today. Depending on the complexity of the question, users may have to visit multiple websites, sift through the collected information and stitch it together into a cohesive idea before evaluating it.

“That's more work than having a model that hopefully has read these pages already and can synthesize this into something that doesn't currently exist on the web,” Brendan Dolan-Gavitt, Assistant Professor in the Computer Science and Engineering Department at NYU Tandon, told Engadget. “The information is still out there. It's still verifiable, and hopefully correct. But it's not all in place.”

For its part, Google’s vision of the AI-powered future has users hanging around its search page rather than clicking through to destination sites. Information relevant to the user’s query would be collected from the web, stitched together by the language model, then regurgitated as an answer with reference to the originating website displayed as footnotes.
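Stripped to its essentials, that collect-stitch-cite loop is a small piece of string assembly. A toy illustration (not Google's actual pipeline; the function and its inputs are invented for this sketch):

```python
def synthesize_answer(snippets):
    """Join retrieved snippets into one answer, numbering each
    source as a footnote. `snippets` is a list of (text, url) pairs
    standing in for content pulled from the web."""
    body = " ".join(f"{text} [{i}]" for i, (text, _) in enumerate(snippets, 1))
    refs = "\n".join(f"[{i}] {url}" for i, (_, url) in enumerate(snippets, 1))
    return f"{body}\n\n{refs}"
```

In the real product, of course, the language model rewrites the snippets rather than concatenating them, which is exactly where the hallucination risk discussed below creeps in.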

This all sounded great, and was all going great, right up until the very first opportunity for something to go wrong, which it promptly did. In its inaugural Twitter ad, less than 24 hours after debuting, Bard, Google’s answer to ChatGPT, confidently declared, “JWST took the very first pictures of a planet outside of our own solar system.” You will be shocked to learn that the James Webb Space Telescope did not, in fact, take the first pictures of an exoplanet. The ESO’s Very Large Telescope holds that honor, from 2004. Bard just sorta made it up, hallucinated it out of the digital ether.

Of course this isn’t the first time that we’ve been lied to by machines. Search has always been a bit of a crapshoot, ever since the early days of Lycos and AltaVista. “When search was released, we thought it was ‘good enough’ though it wasn't perfect,” Singh recalled. “It would give all kinds of results. Over time, those have improved a lot. We played with it, and we realized when we should trust it and when we shouldn’t — when we should go to the second page of results, and when we shouldn't.”

The subsequent generation of voice AI assistants worked through the same basic issues as their text-based predecessors. “When Siri and Google Assistant and all of these came out and Alexa,” Singh said, “they were not the assistants that they were being sold to us as.”

The performance of today’s LLMs, like Bard and ChatGPT, is likely to improve along similar paths through public use, as well as through further specialization into specific technical and knowledge-based roles such as medicine, business analysis and law. “I think there are definitely reasons it becomes much better once you start specializing it. I don't think Google and Microsoft specifically are going to be specializing it too much — their market is as general as possible,” Singh noted.

In many ways, what Google and Bing are offering by interposing their services in front of the wider internet, much as AOL did in the ‘90s, is a logical response to the challenges facing today’s internet users.

“Nobody's doing the search as the end goal. We are seeking some information, eventually to act on that information,” Singh argues. “If we think about that as the role of search, and not just search in the literal sense of literally searching for something, you can imagine something that actually acts on top of search results can be very useful.”

Singh characterizes this centralization of power as, “a very valid concern. Simply put, if you have these chat capabilities, you are much less inclined to actually go to the websites where this information resides,” he said.

It’s bad enough that chatbots have a habit of making broad intellectual leaps in their summarizations, but the practice may also “incentivize users not [to] go to the website, not read the whole source, to just get the version that the chat interface gives you and sort of start relying on it more and more,” Singh warned.

In this, Singh and Dolan-Gavitt agree. “If you’re cannibalizing from the visits that a site would have gotten, and are no longer directing people there, but using the same information, there's an argument that these sites won't have much incentive to keep posting new content,” Dolan-Gavitt told Engadget. “On the other hand, the need for clicks also is one of the reasons we get lots of spam and is one of the reasons why search has sort of become less useful recently. I think [the shortcomings of search are] a big part of why people are responding more positively to these chatbot products.”

That demand, combined with a nascent marketplace, is resulting in a scramble among the industry’s major players to get their products out yesterday, ready or not, underwhelming or not. That rush for market share is decidedly hazardous for consumers. Microsoft’s previous foray into AI chatbots, 2016’s Tay, ended poorly (to put it without the white hoods and goose stepping). Today, Redditors are already jailbreaking ChatGPT to generate racist content. These are two of the more innocuous challenges we will face as LLMs expand in use, but even they have proven difficult to stamp out, in part because they require coordination among an industry of vicious competitors.

“The kinds of things that I tend to worry about are, on the software side, whether this puts malicious capabilities into more hands, makes it easier for people to write malware and viruses,” Dolan-Gavitt said. “This is not as extreme as things like misinformation but certainly, I think it'll make it a lot easier for people to make spam.”

“A lot of the thinking around safety so far has been predicated on this idea that there would be just a couple kinds of central companies that, if you could get them all to agree, we could have some safety standards,” Dolan-Gavitt continued. “I think the more competition there is, the more you get this open environment where you can download an unrestricted model, set it up on your server and have it generate whatever you want. The kinds of approaches that relied on this more centralized model will start to fall apart.”

Google Fi warns customers that their data has been compromised

Google has notified customers of its Fi mobile virtual network operator (MVNO) service that hackers were able to access some of their information, according to TechCrunch. The tech giant said the bad actors infiltrated a third-party system used for customer support at Fi's primary network provider. While Google didn't name the provider outright, Fi relies on US Cellular and T-Mobile for connectivity. If you'll recall, the latter admitted in mid-January that hackers had been taking data from its systems since November last year.

T-Mobile said the attackers got away with the information of around 37 million postpaid and prepaid customers before it discovered and contained the issue. Back then, the carrier insisted that no passwords, payment information or Social Security numbers were stolen. Google Fi is saying the same thing, adding that no PINs or text message/call contents were taken, either. The hackers apparently only had access to users' phone numbers, account status, SIM card serial numbers and some service plan information, like international roaming.

Google reportedly told most users that they didn't have to do anything and that it's still working with Fi's network provider to "identify and implement measures to secure the data on that third-party system and notify everyone potentially impacted." That said, at least one customer claimed to have had more serious issues than most because of the breach. They shared part of Google's supposed email to them on Reddit, telling them that their "mobile phone service was transferred from [their] SIM card to another SIM card" for almost two hours on January 1st.

The customer said they received password reset notifications from Outlook, their crypto wallet account and two-factor authenticator Authy that day. They sent logs to 9to5Google to prove that the attackers had used their number to receive text messages that allowed them to access those accounts. Based on their Fi text history, the bad actors started resetting passwords and requesting two-factor authentication codes via SMS within one minute of transferring their SIM card. The customer was reportedly only able to regain control of their accounts after turning network access on their iPhone off and back on, though it's unclear if that's what solved the issue. We've reached out to Google for a statement regarding the customer's SIM-swapping claim and will update this post when we hear back.

Starlink is adding a 1TB data cap for usage during peak hours

Starlink raised its prices this spring, and now it's increasing the costs for its most demanding users. As The Verge reports, the SpaceX-run satellite internet provider is instituting a 1TB "Priority Access" monthly cap for data use between 7AM and 11PM beginning in December. Cross that limit and you'll spend the rest of the month relegated to "Basic Access" that, like with some phone carriers, deprioritizes your data when the network is busy. You might not notice much of a difference in typical situations, but this won't thrill you if you depend on sustained performance.

Service can get expensive if you insist on full performance around the clock. You'll pay 25 cents per gigabyte of priority data. As Reddit user Nibbloid pointed out, the math doesn't quite add up. It will cost you another $250 to get an extra 1TB of data — it would be cheaper to add a second subscription, at least if you don't mind the cost of an extra terminal. RV, Portability and "Best Effort" users also don't have any Priority Access.

Other users face tougher restrictions. Fixed business service has peak-hour caps ranging from 500GB to 3TB, with extra full-speed data costing $1 per gigabyte. Mobility users have no Priority Access for recreational use, while commercial and Premium/Maritime users have respective 1TB and 5TB caps. Those higher-end users will pay $2 for every gigabyte of priority data they need.
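Taken together, the overage pricing is simple per-gigabyte arithmetic, which is how Reddit's math works out to $250 for an extra terabyte on the residential plan. A quick sketch using the rates reported above (tier names are our own labels):

```python
# Reported Priority Access overage rates, in dollars per GB.
OVERAGE_RATE = {
    "residential": 0.25,
    "fixed_business": 1.00,
    "commercial": 2.00,  # also Premium/Maritime
}

def overage_cost(tier, extra_gb):
    """Cost of buying extra full-speed data past the monthly cap."""
    return OVERAGE_RATE[tier] * extra_gb

# An extra 1TB on the residential plan:
# overage_cost("residential", 1000) -> 250.0
```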

The justifications will sound familiar if you've dealt with data caps from Comcast and other land-based internet providers. Starlink maintains that it has to balance supply with demand to provide fast service to the "greatest number of people." This is ostensibly to keep usage in check on a "finite resource."

The decision to cap users comes as SpaceX has called for government help to fund Starlink service in Ukraine at a claimed cost of nearly $400 million per year. While Musk has said SpaceX will continue to pay regardless of assistance, it's clear the company is worried about expenses as demand increases.

Google Fiber will offer 5Gbps and 8Gbps internet plans in early 2023

Google Fiber's sudden revival will include a dramatic boost to internet speeds. Google has revealed that it will offer 5Gbps and 8Gbps plans in early 2023 at respective monthly rates of $125 and $150. Both tiers will include symmetric upload and download rates, a WiFi 6 router and up to two mesh network extenders. The upgrades should help with massive file transfers while keeping lag and jittering to a bare minimum, according to the company.

Current customers, particularly in Kansas City, Utah and West Des Moines, can try the speedier plans as soon as November if they sign up to become "trusted testers." If you're eligible, Google will ask you how you expect to use the extra bandwidth.

This is a big jump from the previous-best 2Gbps service Google introduced in 2020, and could make a big difference if you're a gamer or thrive on cloud computing. If a 150GB Microsoft Flight Simulator download takes 11 minutes at 2Gbps, the 8Gbps plan could cut that wait to less than three minutes in ideal conditions. It certainly makes typical cable internet plans seem expensive. Comcast is already offering 6Gbps service in some areas, for instance, but that costs $300 per month on contract and doesn't yet include symmetric uploads.
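The back-of-the-envelope math: 150GB is 1,200 gigabits, so transfer time scales inversely with link speed. In ideal conditions (real-world throughput will run below line rate, hence the article's slightly longer figure):

```python
def download_minutes(size_gb, speed_gbps):
    """Ideal transfer time in minutes: gigabytes -> gigabits / rate."""
    return (size_gb * 8) / speed_gbps / 60

# 150GB at 2Gbps vs. 8Gbps, assuming the link runs at full rate:
# download_minutes(150, 2) -> 10.0 minutes
# download_minutes(150, 8) -> 2.5 minutes
```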

Either way, the new plans represent a declaration of intent. Alongside the first network expansions in five years, the upgraded speeds suggest Google is getting back to Fiber's roots. That is, it's both raising expectations for truly fast internet access and (to a degree) spurring competition among incumbent providers. This could help Google pitch its other services, of course, but you might not mind if it gives telecoms an extra incentive to roll out '10G' and similar upgrades sooner than they might have otherwise.

SpaceX wants to put Starlink internet on rural school buses

Starlink satellite internet access has already spread to boats and RVs, and now it might accompany your child on the way home from class. SpaceX told the FCC in a filing that it's piloting Starlink aboard school buses in the rural US. The project would keep students connected during lengthy rides (over an hour in the pilot), ensuring they can complete internet-related homework in a timely fashion even if broadband is slow or non-existent at home.

The spaceflight company simultaneously backed FCC chair Jessica Rosenworcel's May proposal to bring WiFi to school buses, and said it supported the regulator's efforts to fund school and library internet access through the E-Rate program. To no one's surprise, SpaceX felt it had the best solution thanks to rapid satellite deployment, portable dishes and fast service for the "most remote" areas.

We've asked the FCC and SpaceX for comment, and will let you know if they respond. The pitch comes just two months after the FCC cleared the use of Starlink in vehicles, noting that it would serve the "public interest" to keep people online while on the move. The concept isn't new — Google outfitted school buses with WiFi in 2018 following tests, for example.

There's no guarantee the FCC will embrace SpaceX and fund bus-based Starlink service. The Commission rejected SpaceX's request for $885.5 million in help through the Rural Digital Opportunity Fund, and the firm responded by blasting the rejection as "grossly unfair" and allegedly unsupported by evidence. Satellite internet service theoretically offers more consistent rural coverage than cellular data, though, and Starlink competitors like Amazon's Project Kuiper have yet to deploy in earnest.