Many users express frustration with the increasing presence of AI technologies in everyday applications, often feeling they did not request these features and cannot avoid interacting with them. The integration of AI into products from companies like Google (AI Overviews), Meta (Meta AI), Microsoft (built-in Copilot button), and Apple (Apple Intelligence) raises significant data privacy concerns, as these companies use customer data to improve their AI systems. Although users can sometimes opt out of data collection or disable certain AI features, the availability of those options varies by company and region. Full Story
Author: Ian Vielma
Amdocs, a software company, surveyed 500 Generation Z employees about AI. The survey revealed that 50% of Gen Z employees would consider leaving their jobs if their company did not provide training on generative AI, which they believe affects their career growth and skills. While 90% of employees report that their company offers some form of AI training, there is a notable difference between sectors: 47% of tech workers see AI training as a priority, compared to only 34% of non-tech workers. The report highlights a skills gap in AI proficiency: 80% of Gen Z feel proficient in generative AI, yet only…
Just last month, California introduced a bill (S.B. 1047) focused on regulating some aspects of AI. The bill was recently vetoed by California Gov. Gavin Newsom, who stated, “I do not believe this is the best approach to protecting the public from real threats posed by the technology.” Newsom plans to work closely with experts to develop safety measures and propose a new bill at a later date. Full Story
OpenAI, the creator of ChatGPT, has launched a new AI model, o1, designed to solve complex problems in science, coding, and math more effectively than previous versions, with regular updates planned for users. The o1 model demonstrates significant advancements and is claimed to be “performing at levels comparable to PhD students” in various scientific disciplines. Full Story
A recent survey by the State Educational Technology Directors Association indicated a significant rise in interest among educators regarding AI, with 90% of state education officials expressing the need for guidance on AI policies, compared to just 55% the previous year. Nearly half of U.S. states have developed some form of AI guidance, and 14% are working on broader AI policy initiatives, including teacher training and AI literacy. Pat Yongpradit, chief academic officer for Code.org and a leader of TeachAI, stated that the efforts are “a solid start,” but there is still much to be done to ensure the safety…
In a survey of 930 non-profit organizations, more than two-thirds reported having already experimented with artificial intelligence. However, they don’t use it on a day-to-day basis because of concerns about data breaches and bias. Some of the ways in which they use AI:

- Translating and transcribing
- Serving as a virtual assistant
- Interpreting data
- Organizing data
- Making predictions

The source includes a chart showing how the 930 non-profit organizations answered when asked about the risks versus rewards of using AI. Full Story
Companies like Synechron and USAA are implementing large-scale AI training programs to upskill their employees. Among their strategies, these companies have started to host hackathons, which are used to give employees experience with AI tools, explore new use cases, and gain familiarity with the technology. Some companies offer training programs with flexible guidelines, depending on the size and needs of their workforce. Full Story
Meta has admitted that it scrapes public data from Australian users’ profiles, including photos and posts, to train its AI models and does not offer an opt-out option for Australians, unlike in the European Union. The Australian government is considering a ban on social media for children and plans to reform outdated privacy laws due to concerns over privacy protection and the exploitation of user data by companies like Meta. Full Story
OpenAI, Adobe, and Microsoft have recently endorsed California’s bill A.B. 3211, which would require AI-generated content to be watermarked. There are concerns about how enforceable this would be. A trade group representing Microsoft and Adobe initially opposed the bill, calling it “unworkable” and “overly burdensome,” but has since dropped its opposition and decided to go along with it. Full Story
There is concern over using public records for AI training, stemming from the possibility of biased and inaccurate outcomes due to the data’s inherent flaws. Current data privacy laws are inadequate: personal information is often retained too long and exposed, leading to identity theft and misuse. Stricter regulations are needed to control data collection, retention, and AI use, ensuring better transparency and protection of personal information. Full Story