Feed aggregator

BitTorrent Is No Longer the 'King' of Upstream Internet Traffic

Slashdot.org - 2 hours 6 min ago
An anonymous reader quotes a report from TorrentFreak: Back in 2004, in the pre-Web 2.0 era, research indicated that BitTorrent was responsible for an impressive 35% of all Internet traffic. At the time, file-sharing via peer-to-peer networks was the main traffic driver as no other services consumed large amounts of bandwidth. Fast-forward two decades and these statistics are ancient history. With the growth of video streaming, including services such as YouTube, Netflix, and TikTok, file-sharing traffic is nothing more than a drop in today's data pool. [...] This week, Canadian broadband management company Sandvine released its latest Global Internet Phenomena Report which makes it clear that BitTorrent no longer leads any charts. The latest data show that video and social media are the leading drivers of downstream traffic, accounting for more than half of all fixed access and mobile data worldwide. Needless to say, BitTorrent is nowhere to be found in the list of 'top apps'. Looking at upstream traffic, BitTorrent still has some relevance on fixed access networks where it accounts for 4% of the bandwidth. However, it's been surpassed by cloud storage apps, FaceTime, Google, and YouTube. On mobile connections, BitTorrent no longer makes it into the top ten. The average of 46 MB upstream traffic per subscriber shouldn't impress any file-sharer. However, since only a small percentage of all subscribers use BitTorrent, the upstream traffic per user is of course much higher.
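The closing arithmetic is easy to sketch: dividing the network-wide per-subscriber average by the share of subscribers who actually run BitTorrent gives the per-user figure. The 5% share below is purely illustrative, not a number from the Sandvine report.

```python
# Network-wide average BitTorrent upstream per subscriber, from the report.
avg_upstream_mb = 46

# Illustrative assumption: only a small share of subscribers run BitTorrent.
assumed_user_share = 0.05  # 5% is a made-up figure for the sake of the example

# Average upstream per *actual* BitTorrent user.
per_user_mb = avg_upstream_mb / assumed_user_share
print(f"{per_user_mb:.0f} MB per active user")  # prints "920 MB per active user"
```

The smaller the assumed share of active users, the larger the per-user figure, which is the report's point: the modest network-wide average conceals heavy per-user upload volumes.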

Read more of this story at Slashdot.

Cisco Completes $28 Billion Acquisition of Splunk

Slashdot.org - 2 hours 48 min ago
Cisco on Monday completed its $28 billion acquisition of Splunk, a powerhouse in data analysis, security and observability tools. The deal was first announced in September 2023. SecurityWeek reports: Cisco plans to leverage Splunk's AI, security and observability capabilities to complement Cisco's solution portfolio. Cisco says the transaction is expected to be cash flow positive and non-GAAP gross margin accretive in Cisco's fiscal year 2025, and non-GAAP EPS accretive in fiscal year 2026. "We are thrilled to officially welcome Splunk to Cisco," Chuck Robbins, Chair and CEO of Cisco, said in a statement. "As one of the world's largest software companies, we will revolutionize the way our customers leverage data to connect and protect every aspect of their organization as we help power and protect the AI revolution."

Read more of this story at Slashdot.

Sony Reportedly Pauses PSVR 2 Production Due To Low Sales

Slashdot.org - 3 hours 28 min ago
According to Bloomberg, Sony has paused production of its PlayStation VR 2 virtual reality headset, as sales have "slowed progressively" since its February 2023 launch. Road to VR reports: Citing people familiar with the company's plans, the report says Sony has produced "well over 2 million units" since launch and notes that stocks of the $550 headset are building up. The report alleges the surplus is "throughout Sony's supply chain," indicating the issue isn't confined to a single location, but is spread across different stages of Sony's production and distribution network. This follows news that Sony Interactive Entertainment laid off eight percent of the company, which affected a number of its first-party game studios also involved in VR game production. Sony entirely shuttered its London Studio, which created VR action-adventure game Blood & Truth (2019), and reduced headcount at Firesprite, the studio behind PSVR 2 exclusive Horizon Call of the Mountain. Meanwhile, Sony is making PSVR 2 officially compatible with PC VR games, as the company hopes to release some sort of PC support for the headset later this year. How and when Sony will do that is still unknown, although the move underlines just how little confidence the company has in its future lineup of exclusive content just one year after the launch of PSVR 2.

Read more of this story at Slashdot.

5-Year Study Finds No Brain Abnormalities In 'Havana Syndrome' Patients

Slashdot.org - 4 hours 8 min ago
An anonymous reader quotes a report from CBC News: An array of advanced tests found no brain injuries or degeneration among U.S. diplomats and other government employees who suffer mysterious health problems once dubbed "Havana syndrome," researchers reported Monday. The National Institutes of Health's (NIH) nearly five-year study offers no explanation for symptoms including headaches, balance problems and difficulties with thinking and sleep that were first reported in Cuba in 2016 and later by hundreds of American personnel in multiple countries. But it did contradict some earlier findings that raised the spectre of brain injuries in people experiencing what the State Department now calls "anomalous health incidents." "These individuals have real symptoms and are going through a very tough time," said Dr. Leighton Chan, NIH's chief of rehabilitation medicine, who helped lead the research. "They can be quite profound, disabling and difficult to treat." Yet sophisticated MRI scans detected no significant differences in brain volume, structure or white matter -- signs of injury or degeneration -- when Havana syndrome patients were compared to healthy government workers with similar jobs, including some in the same embassy. Nor were there significant differences in cognitive and other tests, according to findings published in the Journal of the American Medical Association.

Read more of this story at Slashdot.

Chinese and Western Scientists Identify 'Red Lines' on AI Risks

Slashdot.org - 4 hours 48 min ago
Leading western and Chinese AI scientists have issued a stark warning that tackling risks around the powerful technology requires global co-operation similar to the cold war effort to avoid nuclear conflict. From a report: A group of renowned international experts met in Beijing last week, where they identified "red lines" on the development of AI, including around the making of bioweapons and launching cyber attacks. In a statement seen by the Financial Times, issued in the days after the meeting, the academics warned that a joint approach to AI safety was needed to stop "catastrophic or even existential risks to humanity within our lifetimes." "In the depths of the cold war, international scientific and governmental co-ordination helped avert thermonuclear catastrophe. Humanity again needs to co-ordinate to avert a catastrophe that could arise from unprecedented technology," the statement said. Signatories include Geoffrey Hinton and Yoshua Bengio, who won a Turing Award for their work on neural networks and are often described as "godfathers" of AI; Stuart Russell, a professor of computer science at the University of California, Berkeley; and Andrew Yao, one of China's most prominent computer scientists. The statement followed the International Dialogue on AI Safety in Beijing last week, a meeting that included officials from the Chinese government in a signal of tacit official endorsement for the forum and its outcomes.

Read more of this story at Slashdot.

US Supreme Court Seems Wary of Curbing US Government Contacts With Social Media Platforms

Slashdot.org - 5 hours 28 min ago
U.S. Supreme Court justices on Monday appeared skeptical of a challenge on free speech grounds to how President Joe Biden's administration encouraged social media platforms to remove posts that federal officials deemed misinformation, including about elections and COVID-19. From a report: The justices heard oral arguments in the administration's appeal of a lower court's preliminary injunction constraining how White House and certain other federal officials communicate with social media platforms. The Republican-led states of Missouri and Louisiana, along with five individual social media users, sued the administration. They argued that the government's actions violated the U.S. Constitution's First Amendment free speech rights of users whose posts were removed from platforms such as Facebook, YouTube, and Twitter, now called X. The case tests whether the administration crossed the line from mere communication and persuasion to strong-arming or coercing platforms -- sometimes called "jawboning" -- to unlawfully censor disfavored speech, as lower courts found.

Read more of this story at Slashdot.

Games Are Coming To LinkedIn

Slashdot.org - 6 hours 8 min ago
Soon you might be able to compete in games against friends, colleagues, and even the office next door on LinkedIn. From a report: The Microsoft-owned company is reportedly planning to add a new gaming experience to the platform. According to TechCrunch, the experience is designed to tap into the popularity of games like Wordle. Players' scores will be sorted and ranked by workplace, allowing you to take on another office, or even one across the country. App researcher Nima Owji posted photos of the gaming experience on Twitter/X on Saturday. A representative from LinkedIn confirmed to TechCrunch that the company is working on adding puzzle-based games to the LinkedIn experience as a way to "unlock a bit of fun, deepen relationships, and hopefully spark the opportunity for conversations."

Read more of this story at Slashdot.

Investment Advisors Pay the Price For Selling What Looked a Lot Like AI Fairy Tales

Slashdot.org - 6 hours 47 min ago
Two investment advisors have reached settlements with the US Securities and Exchange Commission for allegedly exaggerating their use of AI, which in both cases was purported to be a cornerstone of their offerings. From a report: Canada-based Delphia and San Francisco-headquartered Global Predictions will cough up $225,000 and $175,000 respectively for telling clients that their products used AI to improve forecasts. The financial watchdog said both were engaging in "AI washing," a term used to describe the embellishment of machine-learning capabilities. "We've seen time and again that when new technologies come along, they can create buzz from investors as well as false claims by those purporting to use those new technologies," said SEC chair Gary Gensler. "Delphia and Global Predictions marketed to their clients and prospective clients that they were using AI in certain ways when, in fact, they were not." Delphia claimed its system utilized AI and machine learning to incorporate client data, a statement the SEC said it found to be false. "Delphia represented that it used artificial intelligence and machine learning to analyze its retail clients' spending and social media data to inform its investment advice when, in fact, no such data was being used in its investment process," the SEC said in a settlement order. Despite being warned about suspected misleading practices in 2021 and agreeing to amend them, Delphia only partially complied, according to the SEC. The company continued to market itself as using client data as AI inputs but never did anything of the sort, the regulator said.

Read more of this story at Slashdot.

Apex Legends Streamers Warned To 'Perform a Clean OS Reinstall as Soon as Possible' After Hacks During NA Finals Match

Slashdot.org - 7 hours 28 min ago
An anonymous reader shares a report: The Apex Legends Global Series is currently in regional finals mode, but the North America finals have been delayed after two players were hacked mid-match. First, Noyan "Genburten" Ozkose of DarkZero suddenly found himself able to see other players through walls, then Phillip "ImperialHal" Dosen of TSM was given an aimbot. Genburten's hack happened part of the way through the day's third match. A Twitch clip of the moment shows the words "Apex hacking global series by Destroyer2009 & R4ndom" repeating over chat as he realizes he's been given a cheat and takes his hands off the controls. "I can see everyone!" he says, before leaving the match. ImperialHal was hacked in the game immediately after that. "I have aimbot right now!" he shouts in a clip of the moment, before declaring "I can't shoot." Though he continued attempting to play out the round, the match was later abandoned. The volunteers at the Anti-Cheat Police Department have since issued a PSA announcing, "There is currently an RCE exploit being abused in [Apex Legends]" and that it could be delivered via the game itself, or its anti-cheat protection. "I would advise against playing any games protected by EAC or any EA titles", they went on to say. As for players of the tournament, they strongly recommended taking protective measures. "It is advisable that you change your Discord passwords and ensure that your emails are secure. also enable MFA for all your accounts if you have not done it yet", they said, "perform a clean OS reinstall as soon as possible. Do not take any chances with your personal information, your PC may have been exposed to a rootkit or other malicious software that could cause further damage." The rest of the series has now been postponed, "Due to the competitive integrity of this series being compromised," as the official Twitter account announced. They finished by saying, "We will share more information soon."

Read more of this story at Slashdot.

AI-Generated Science

Slashdot.org - 8 hours 8 min ago
Published scientific papers include language that appears to have been generated by AI tools like ChatGPT, showing how pervasive the technology has become, and highlighting longstanding issues with some peer-reviewed journals. From a report: Searching for the phrase "As of my last knowledge update" on Google Scholar, a free search tool that indexes articles published in academic journals, returns 115 results. The phrase is often used by OpenAI's ChatGPT to indicate the cutoff date of the data its answers draw on, and the specific months and years found in these academic papers correspond to previous ChatGPT "knowledge updates." "As of my last knowledge update in September 2021, there is no widely accepted scientific correlation between quantum entanglement and longitudinal scalar waves," reads a paper titled "Quantum Entanglement: Examining its Nature and Implications" published in the "Journal of Material Sciences & Manfacturing [sic] Research," a publication that claims it's peer-reviewed. Over the weekend, a tweet showing the same AI-generated phrase appearing in several scientific papers went viral. Most of the scientific papers I looked at that included this phrase are small, not well known, and appear to be "paper mills," journals with low editorial standards that will publish almost anything quickly. One publication where I found the AI-generated phrase, the Open Access Research Journal of Engineering and Technology, advertises "low publication charges," an "e-certificate" of publication, and is currently advertising a call for papers, promising acceptance within 48 hours and publication within four days.
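The search described above is easy to reproduce on any local collection of paper texts. The sketch below simply scans a directory of plain-text files for telltale ChatGPT phrases; the directory layout and function name are assumptions for illustration, not part of the reporting.

```python
from pathlib import Path

# Phrases ChatGPT commonly emits that should never survive into a finished paper.
TELL_PHRASES = [
    "as of my last knowledge update",
    "as an ai language model",
]

def find_ai_tells(corpus_dir: str) -> list[tuple[str, str]]:
    """Return (filename, phrase) pairs for every tell phrase found in the corpus."""
    hits = []
    for path in Path(corpus_dir).glob("*.txt"):
        text = path.read_text(errors="ignore").lower()
        for phrase in TELL_PHRASES:
            if phrase in text:
                hits.append((path.name, phrase))
    return hits
```

Run over plain-text dumps of a journal archive, this flags candidates for manual review; a match is a strong hint of undisclosed AI use, not proof.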

Read more of this story at Slashdot.

Fujitsu Says It Was Hacked, Warns of Data Breach

Slashdot.org - 8 hours 48 min ago
Multinational technology giant Fujitsu confirmed a cyberattack in a statement Friday, and warned that hackers may have stolen personal data and customer information. From a report: "We confirmed the presence of malware on multiple work computers at our company, and as a result of an internal investigation, we discovered that files containing personal information and customer information could be illegally taken out," said Fujitsu in its statement on its website, translated from Japanese. Fujitsu said it disconnected the affected systems from its network, and is investigating how its network was compromised by malware and "whether information has been leaked." The tech conglomerate did not specify what kind of malware was used, or the nature of the cyberattack. Fujitsu also did not say what kind of personal information may have been stolen, or who the personal information pertains to -- such as its employees, corporate customers, or citizens whose governments use the company's technologies.

Read more of this story at Slashdot.

Google Researchers Unveil 'VLOGGER', an AI That Can Bring Still Photos To Life

Slashdot.org - 9 hours 28 min ago
Google researchers have developed a new AI system that can generate lifelike videos of people speaking, gesturing and moving -- from just a single still photo. From a report: The technology, called VLOGGER, relies on advanced machine learning models to synthesize startlingly realistic footage, opening up a range of potential applications while also raising concerns around deepfakes and misinformation. Described in a research paper titled "VLOGGER: Multimodal Diffusion for Embodied Avatar Synthesis," (PDF) the AI model can take a photo of a person and an audio clip as input, and then output a video that matches the audio, showing the person speaking the words and making corresponding facial expressions, head movements and hand gestures. The videos are not perfect, with some artifacts, but represent a significant leap in the ability to animate still images. The researchers, led by Enric Corona at Google Research, leveraged a type of machine learning model called diffusion models to achieve the novel result. Diffusion models have recently shown remarkable performance at generating highly realistic images from text descriptions. By extending them into the video domain and training on a vast new dataset, the team was able to create an AI system that can bring photos to life in a highly convincing way. "In contrast to previous work, our method does not require training for each person, does not rely on face detection and cropping, generates the complete image (not just the face or the lips), and considers a broad spectrum of scenarios (e.g. visible torso or diverse subject identities) that are critical to correctly synthesize humans who communicate," the authors wrote.

Read more of this story at Slashdot.

Empowering the Retail Media Ecosystem with Search Ads 360

GoogleBlog - 9 hours 38 min ago
We’re adding offsite retail media capabilities to SA360 to help retailers and brands sell more products together.

Grok AI Goes Open Source

Slashdot.org - 10 hours 8 min ago
xAI has open sourced its large language model Grok. From a report: The move, which Musk had previously proclaimed would happen this week, now enables any other entrepreneur, programmer, company, or individual to take Grok's weights -- the strength of connections between the model's artificial "neurons," or software modules that allow the model to make decisions and accept inputs and provide outputs in the form of text -- and other associated documentation and use a copy of the model for whatever they'd like, including for commercial applications. "We are releasing the base model weights and network architecture of Grok-1, our large language model," the company announced in a blog post. "Grok-1 is a 314 billion parameter Mixture-of-Experts model trained from scratch by xAI." Those interested can download the code for Grok on its GitHub page or via a torrent link. Parameters refers to the weights and biases that govern the model -- the more parameters, generally the more advanced, complex and performant the model is. At 314 billion parameters, Grok is well ahead of open source competitors such as Meta's Llama 2 (70 billion parameters) and Mistral's Mixtral 8x7B (roughly 47 billion total parameters). Grok was open sourced under an Apache License 2.0, which enables commercial use, modifications, and distribution, though it cannot be trademarked and users receive no liability protection or warranty with it. In addition, they must reproduce the original license and copyright notice, and state the changes they've made.
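The "Mixture-of-Experts" label means that for each token, a learned router activates only a few of the model's "expert" sub-networks rather than the whole model. The toy sketch below (tiny dimensions, random weights, all names invented) illustrates the routing idea only, not Grok's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts, top_k, d_model = 8, 2, 16  # toy sizes; Grok-1 is vastly larger

# One tiny linear "expert" per slot, plus a router that scores experts per token.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
router = rng.normal(size=(d_model, n_experts))

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector to its top-k experts and mix their outputs."""
    scores = x @ router                          # one score per expert
    top = np.argsort(scores)[-top_k:]            # indices of the k best-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                     # softmax over the chosen experts only
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=d_model)
out = moe_forward(token)                         # only 2 of the 8 experts ran
```

This is why a 314-billion-parameter MoE model is cheaper to run than its raw size suggests: only the routed experts' parameters are active for any given token.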

Read more of this story at Slashdot.
