Daily Tech Digest - February 04, 2025


Quote for the day:

"Develop success from failures. Discouragement and failure are two of the surest stepping stones to success." -- Dale Carnegie


Technology skills gap plagues industries, and upskilling is a moving target

“The deepening threat landscape and rapidly evolving high-momentum technologies like AI are forcing organizations to move with lightning speed to fill specific gaps in their job architectures, and too often they are stumbling,” said David Foote, chief analyst at consultancy Foote Partners. To keep up with the rapidly changing landscape, Gartner suggests that organizations invest in agile learning for tech teams. “In the context of today’s AI-fueled accelerated disruption, many business leaders feel learning is too slow to respond to the volume, variety and velocity of skills needs,” said Chantal Steen, a senior director in Gartner’s HR practice. “Learning and development must become more agile to respond to changes faster and deliver learning more rapidly and more cost effectively.” Studies from staffing firm ManpowerGroup, hiring platform Indeed, and Deloitte consulting show that tech hiring will focus on candidates with flexible skills to meet evolving demands. “Employers know a skilled and adaptable workforce is key to navigating transformation, and many are prioritizing hiring and retaining people with in-demand flexible skills that can flex to where demand sits,” said Jonas Prising, ManpowerGroup chair and CEO.


Mixture of Experts (MoE) Architecture: A Deep Dive & Comparison of Top Open-Source Offerings

The application of MoE to open-source LLMs offers several key advantages. Firstly, it enables the creation of more powerful and sophisticated models without incurring the prohibitive costs associated with training and deploying massive, single-model architectures. Secondly, MoE facilitates the development of more specialized and efficient LLMs, tailored to specific tasks and domains. This specialization can lead to significant improvements in performance, accuracy, and efficiency across a wide range of applications, from natural language translation and code generation to personalized education and healthcare. The open-source nature of MoE-based LLMs promotes collaboration and innovation within the AI community. By making these models accessible to researchers, developers, and businesses, MoE fosters a vibrant ecosystem of experimentation, customization, and shared learning. ... Integrating MoE architecture into open-source LLMs represents a significant step forward in the evolution of artificial intelligence. By combining the power of specialization with the benefits of open-source collaboration, MoE unlocks new possibilities for creating more efficient, powerful, and accessible AI models that can revolutionize various aspects of our lives.
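The routing idea behind MoE can be made concrete with a short sketch. In the toy Python below (illustrative only; real MoE layers use a learned linear router over token embeddings and neural-network experts), a gate keeps the top-k experts by score, renormalizes their weights, and blends only those experts' outputs:

```python
NUM_EXPERTS = 4
TOP_K = 2

def gate(scores, top_k=TOP_K):
    """Top-k gating: keep the top_k experts by router score and
    renormalize their weights so they sum to 1."""
    ranked = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    chosen = ranked[:top_k]
    total = sum(scores[e] for e in chosen)
    return {e: scores[e] / total for e in chosen}

def moe_forward(scores, expert_outputs):
    """Blend only the chosen experts' outputs, weighted by the gate.
    Experts the gate skips are never evaluated, which is where
    MoE's compute savings come from."""
    weights = gate(scores)
    return sum(w * expert_outputs[e] for e, w in weights.items())
```

Because only `TOP_K` of `NUM_EXPERTS` experts run per token, total parameter count can grow without a proportional increase in per-token compute, which is the cost advantage the article describes.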


The DeepSeek Disruption and What It Means for CIOs

The emergence of DeepSeek has also revived a long-standing debate about open-source AI versus proprietary AI. Open-source AI is not a silver bullet. CIOs need to address critical risks, as open-source AI models, if not secured properly, can be exposed to grave cyberthreats and adversarial attacks. While DeepSeek currently shows extraordinary efficiency, it requires internal infrastructure, unlike GPT-4, which can seamlessly scale on OpenAI's cloud. Open-source AI models also lack vendor support, so users must build their own expertise, which can be demanding. "What happened with DeepSeek is actually super bullish. I look at this transition as an opportunity rather than a threat," said Steve Cohen, founder of Point72. ... Regulatory non-compliance adds another challenge, as many governments restrict sensitive enterprise data from being processed by Chinese technologies. The possibility of a backdoor can't be ruled out, which could expose enterprises to additional risks. CIOs need to conduct extensive security audits before deploying DeepSeek. Organizations can implement safeguards such as on-premises deployment to avoid data exposure. Strict encryption protocols help keep AI interactions confidential, and rigorous security audits ensure the model's safety before it is deployed into business workflows.


Why GreenOps will succeed where FinOps is failing

The cost-control focus fails to engage architects and engineers in rethinking how systems are designed, built and operated for greater efficiency. This lack of engagement results in inertia and minimal progress. For example, a database team we worked with at an organization new to the cloud launched all of its AWS RDS database servers, from dev through production, incurring a $600K-a-month cloud bill nine months before the scheduled production launch. The overburdened team was not thinking about optimizing costs, but rather about optimizing its own time and getting out of the migration team's way as quickly as possible. ... GreenOps — formed by merging FinOps, sustainability and DevOps — addresses the limitations of FinOps while integrating sustainability as a core principle. Green computing contributes to GreenOps by emphasizing energy-efficient design, resource optimization and the use of sustainable technologies and platforms. This foundational focus ensures that every system built under GreenOps principles is not only cost-effective but also minimizes its environmental footprint, aligning technological innovation with ecological responsibility. Moreover, we’ve found that providing emissions feedback to architects and engineers is a bigger motivator than cost to inspire them to design more efficient systems and build automation to shut down underutilized resources.


Best Practices for API Rate Limits and Quotas

Unlike short-term rate limits, the goal of quotas is to enforce business terms such as monetizing your APIs and protecting your business from high-cost overruns by customers. They measure customer utilization of your API over longer durations, such as per hour, per day, or per month. Quotas are not designed to prevent a spike from overwhelming your API. Rather, quotas regulate your API’s resources by ensuring a customer stays within their agreed contract terms. ... Even a protection mechanism like rate limiting can have errors. For example, a bad network connection with Redis could cause reads of rate-limit counters to fail. In such scenarios, it’s important not to artificially reject all requests or lock out users just because your Redis cluster is inaccessible. Your rate-limiting implementation should fail open rather than fail closed, meaning all requests are allowed while the rate-limiting implementation is faulting. This also means rate limiting is not a workaround for poor capacity planning: you should still have sufficient capacity to handle these requests, or even design your system to scale accordingly to handle a large influx of new requests. This can be done through auto-scaling, timeouts, and automatic circuit-breaker trips that enable your API to keep functioning.
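The fail-open behavior described above can be sketched in a few lines. In the minimal fixed-window sketch below, the `FlakyCounterStore` class is an invented stand-in for a remote counter store such as Redis (setting `healthy = False` simulates a lost connection); a real implementation would also expire old buckets:

```python
import time

class FlakyCounterStore:
    """Stand-in for a remote counter store such as Redis;
    setting healthy = False simulates a lost connection."""
    def __init__(self):
        self.counts = {}
        self.healthy = True

    def incr(self, key):
        if not self.healthy:
            raise ConnectionError("counter store unreachable")
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key]

def allow_request(store, api_key, limit, window_s=60, now=None):
    """Fixed-window rate limit that fails open: if reading the
    counter errors out, the request is allowed, not rejected."""
    now = time.time() if now is None else now
    bucket = f"{api_key}:{int(now // window_s)}"
    try:
        return store.incr(bucket) <= limit
    except ConnectionError:
        return True  # fail open: degraded limiting beats a full outage
```

Swapping the `return True` for `return False` would turn this into a fail-closed limiter, which is exactly the user-lockout scenario the article warns against.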


Protecting Ultra-Sensitive Health Data: The Challenges

Protecting ultra-sensitive information "is an incredibly confusing and complicated and evolving part of the law," said regulatory attorney Kirk Nahra of the law firm WilmerHale. "HIPAA generally does not distinguish between categories of health information," he said. "There are exceptions - including the recent Dobbs rule - but these are not fundamental in their application," he said. Information about abortion procedures is perhaps the most hotly debated type of patient information. For instance, last June - in response to the Supreme Court's June 2022 Dobbs ruling, which overturned the national right to abortion - the Biden administration's U.S. Department of Health and Human Services modified the HIPAA Privacy Rule to add safeguards for the access, use and disclosure of reproductive health information. The rule is aimed at protecting women from the use or disclosure of their reproductive health information when it is sought to investigate or impose liability on individuals, healthcare providers or others who seek, obtain, provide or facilitate reproductive healthcare that is lawful under the circumstances in which such healthcare is provided. But that rule is being challenged in federal court by 15 state attorneys general seeking to revoke the regulations.


Evolving threat landscape, rethinking cyber defense, and AI: Opportunities and risks

Businesses are firmly in attackers’ crosshairs. Financially motivated cybercriminals conduct ransomware attacks with record-breaking ransoms being paid by companies seeking to avoid business interruption. Others, including nation-state hackers, infiltrate companies to steal intellectual property and trade secrets to gain commercial advantage over competitors. Further, we regularly see critical infrastructure being targeted by nation-state cyberattacks designed to act as sleeper cells that can be activated in times of heightened tension. Companies are on the back foot. ... As zero trust disrupts obsolete firewall and VPN-based security, legacy vendors are deploying firewalls and VPNs as virtual machines in the cloud and calling it zero trust architecture. This is akin to DVD hardware vendors deploying DVD players in a data center and calling it Netflix! It gives a false sense of security to customers. Organizations need to make sure they are really embracing zero trust architecture, which treats everyone as untrusted and ensures users connect to specific applications or services, rather than a corporate network. ... Unfortunately, the business world’s harnessing of AI for cyber defense has been slow compared to the speed of threat actors harnessing it for attacks. 


Six essential tactics data centers can follow to achieve more sustainable operations

By adjusting energy consumption based on real-time demand, data centers can significantly enhance their operational efficiency. For example, during periods of low activity, power can be conserved by reducing energy use, thus minimizing waste without compromising performance. This includes dynamic power management technologies in switch and router systems, such as shutting down unused line cards or ports and controlling fan speeds to optimize energy use based on current needs. Conversely, during peak demand, operations can be scaled up to meet increased requirements, ensuring consistent and reliable service levels. Doing so not only reduces unnecessary energy expenditure, but also contributes to sustainability efforts by lowering the environmental impact associated with energy-intensive operations. ... Heat generated from data center operations can be captured and repurposed to provide heating for nearby facilities and homes, transforming waste into a valuable resource. This approach promotes a circular energy model, where excess heat is redirected instead of discarded, reducing the environmental impact. Integrating data centers into local energy systems enhances sustainability and offers tangible benefits to surrounding areas and communities whilst addressing broader energy efficiency goals.


The Engineer’s Guide to Controlling Configuration Drift

“Preventing configuration drift is the bedrock for scalable, resilient infrastructure,” comments Mayank Bhola, CTO of LambdaTest, a cloud-based testing platform that provides instant infrastructure. “At scale, even small inconsistencies can snowball into major operational inefficiencies. We encountered these challenges [user-facing impact] as our infrastructure scaled to meet growing demands. Tackling this challenge head-on is not just about maintaining order; it’s about ensuring the very foundation of your technology is reliable. And so, by treating infrastructure as code and automating compliance, we at LambdaTest ensure every server, service, and setting aligns with our growth objectives, no matter how fast we scale. Adopting drift detection and remediation strategies is imperative for maintaining a resilient infrastructure.” ... The policies you set at the infrastructure level, such as those for SSH access, add another layer of security to your infrastructure. Ansible allows you to define policies like removing root access, changing the default SSH port, and setting user command permissions. “It’s easy to see who has access and what they can execute,” Kampa remarks. “This ensures resilient infrastructure, keeping things secure and allowing you to track who did what if something goes wrong.”
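At its core, drift detection reduces to diffing desired state against observed state. A minimal Python sketch (illustrative only; production setups pull the desired state from infrastructure-as-code and the actual state from hosts, e.g. via Ansible facts, and the SSH policy values below are hypothetical):

```python
def detect_drift(desired, actual):
    """Report every key where the observed host state diverges from
    the desired configuration, so it can be alerted on or remediated."""
    drift = {}
    for key, want in desired.items():
        have = actual.get(key)
        if have != want:
            drift[key] = {"expected": want, "found": have}
    return drift

# Hypothetical SSH hardening policy, echoing the controls described above.
desired_ssh = {"port": 2222, "permit_root_login": "no"}
```

Running `detect_drift(desired_ssh, observed)` per host on a schedule, and feeding non-empty results into remediation, is the basic loop that tools built on Ansible or similar automate.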


Strategies for mitigating bias in AI models

The need to address bias in AI models stems from the fundamental principle of fairness. AI systems should treat all individuals equitably, regardless of their background. However, if the training data reflects existing societal biases, the model will likely reproduce and even exaggerate those biases in its outputs. For instance, if a facial recognition system is primarily trained on images of one demographic, it may exhibit lower accuracy rates for other groups, potentially leading to discriminatory outcomes. Similarly, a natural language processing model trained on predominantly Western text may struggle to understand or accurately represent nuances in other languages and cultures. ... Incorporating contextual data is essential for AI systems to provide relevant and culturally appropriate responses. Beyond basic language representation, models should be trained on datasets that capture the history, geography, and social issues of the populations they serve. For instance, an AI system designed for India should include data on local traditions, historical events, legal frameworks, and social challenges specific to the region. This ensures that AI-generated responses are not only accurate but also culturally sensitive and context-aware. Additionally, incorporating diverse media formats such as text, images, and audio from multiple sources enhances the model’s ability to recognise and adapt to varying communication styles.
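One concrete first step toward the mitigation the article calls for is simply measuring the gap: computing accuracy per demographic group rather than a single aggregate. A minimal sketch (the record format and sample data are invented for illustration):

```python
def accuracy_by_group(records):
    """records is an iterable of (group, predicted, actual) triples.
    Returns accuracy per group, exposing disparities that a single
    aggregate accuracy number would hide."""
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / n for g, n in totals.items()}
```

A large spread between groups in this report is the kind of signal that should trigger rebalancing the training data along the lines the article describes.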

Daily Tech Digest - February 03, 2025


Quote for the day:

"Knowledge is being aware of what you can do. Wisdom is knowing when not to do it." -- Anonymous


The CISO’s role in advancing innovation in cybersecurity

CISOs must know the risks of adopting untested solutions, keeping in mind their organization’s priorities and learning how to evaluate new tools and technologies. “We also ensure both parties have clear, shared goals from the start, so we avoid misunderstandings and set everyone up for success,” Maor tells CSO. ... It’s a golden era of cybersecurity innovation driven by emerging cybersecurity threats, but it’s a tale of two companies, according to Perlroth. AI is attracting significant amounts of funding while it’s harder for many other types of startups. Cybersecurity companies continue to get a lot of interest from venture capital (VC) firms, although she’s seeing founders themselves eschewing big general funds in favor of funds and investors with industry knowledge. “Startup founders frequently want to work with venture capitalists who have some kind of specific value add or cyber expertise,” says Perlroth. In this environment, there’s more potential for CISOs to be involved, and those with an appetite for the business side of cyber innovation can look for opportunities to advise and invest in new businesses. Cyber-focused VC firms often engage CISOs to participate in advisory panels and assist with due diligence when vetting startups, according to Haleliuk.


The risks of supply chain cyberattacks on your organisation

Organisations need to ensure they take steps to prevent the risk of key suppliers falling victim to cyberattacks. A good starting point is to work out just where they are most exposed, says Lorri Janssen-Anessi, director of external cyber assessments at BlueVoyant. “Understand your external attack surface and third-party integrations to ensure there are no vulnerabilities,” she urges. “Consider segmentation of critical systems and minimise the blast radius of a breach. Identify the critical vendors or suppliers and ensure those important digital relationships have stricter security practices in place.” Bob McCarter, CTO at NAVEX, believes there needs to be a stronger emphasis on cybersecurity when selecting and reviewing suppliers. “Suppliers need to have essential security controls including multi-factor authentication, phishing education and training, and a Zero Trust framework,” he says. “To avoid long-term financial loss, they must also adhere to relevant cybersecurity regulations and industry standards.” But it’s also important to regularly perform risk assessments, even once the relationship is established, says Janssen-Anessi. “The supply chain ecosystem is not static,” she warns. “Networks and systems are constantly changing to ensure usability. To stay ahead of vulnerabilities or risks that may pop up, it is important to continuously monitor these suppliers.”


Deepseek's AI model proves easy to jailbreak - and worse

On Thursday, Unit 42, a cybersecurity research team at Palo Alto Networks, published results on three jailbreaking methods it employed against several distilled versions of DeepSeek's V3 and R1 models. ... "Our research findings show that these jailbreak methods can elicit explicit guidance for malicious activities," the report states. "These activities include keylogger creation, data exfiltration, and even instructions for incendiary devices, demonstrating the tangible security risks posed by this emerging class of attack." Researchers were able to prompt DeepSeek for guidance on how to steal and transfer sensitive data, bypass security, write "highly convincing" spear-phishing emails, conduct "sophisticated" social engineering attacks, and make a Molotov cocktail. They were also able to manipulate the models into creating malware. ... "While information on creating Molotov cocktails and keyloggers is readily available online, LLMs with insufficient safety restrictions could lower the barrier to entry for malicious actors by compiling and presenting easily usable and actionable output," the paper adds. ... "By circumventing standard restrictions, jailbreaks expose how much oversight AI providers maintain over their own systems, revealing not only security vulnerabilities but also potential evidence of cross-model influence in AI training pipelines," it continues.


10 skills and traits of successful digital leaders

An important skill for CIOs is strategic thinking, which means adopting a “why” mindset, notes Gill Haus, CIO of consumer and community banking at JPMorgan Chase. “I ask questions all the time — even on subjects I think I’m most knowledgeable about,” Haus says. “When others see their leader asking questions, even in the company of more senior leaders, it creates a welcoming atmosphere that encourages everyone to feel safe doing the same.” ... Effective leaders have a clear vision of what technology can do for their organization as well as a solid understanding of it, agrees Stephanie Woerner, director and principal research scientist at MIT’s Center for Information Systems Research (CISR). “They think about the new things they can do with technology, different ways of getting work done or engaging with customers, and how technology enables that.” ... Being able to translate complex technical concepts into clear business value while also maintaining realistic implementation timelines is another important skill. Tech leaders are up to their eyeballs in data, systems, and processes, but all users want is a product that works. A strong digital leader should constantly ask how they can make something easier for their customers.


Prompt Injection for Large Language Models

Many businesses put all of their secrets into the system prompt, and if you're able to steal that prompt, you have all of their secrets. Some of the companies are a bit more clever, and they put their data into files that are then put into the context or referenced by the large language model. In these cases, you can just ask the model to provide you links to download the documents it knows about. Sometimes there are interesting URLs pointing to internal documents, such as Jira, Confluence, and the like. You can learn about the business and its data that it has available. That can be really bad for the business. Another thing you might want to do with these prompt injections is to gain personal advantages. Imagine a huge company, and they have a big HR department, they receive hundreds of job applications every day, so they use an AI based tool to evaluate which candidates are a fit for the open position. ... Another approach to make your models less sensitive to prompt injection and prompt stealing is to fine-tune them. Fine-tuning basically means you take a large language model that has been trained by OpenAI, Meta, or some other vendor, and you retrain it with additional data to make it more suitable for your use case.
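A first line of defense, well short of fine-tuning, is screening inputs for known injection phrasings. In the sketch below the phrase list is invented for illustration, and heuristics like this are easily bypassed by paraphrasing or encoding tricks, so they complement rather than replace output filtering and least-privilege access to the documents in the model's context:

```python
SUSPICIOUS_PHRASES = [
    "ignore previous instructions",
    "reveal your system prompt",
    "repeat the text above",
    "links to download the documents",
]

def looks_like_injection(user_input):
    """Naive keyword screen for prompt-injection attempts; returns
    True when the input matches a known attack phrasing."""
    lowered = user_input.lower()
    return any(phrase in lowered for phrase in SUSPICIOUS_PHRASES)
```

Flagged inputs can be rejected, logged for review, or routed to a stricter prompt, but the more robust mitigations remain keeping real secrets out of the system prompt entirely and limiting what documents the model can reach.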


The hidden dangers of a toxic cybersecurity workplace

Certain roles in cybersecurity are more vulnerable to toxic environments due to the nature of their responsibilities and visibility within the organization. SOC analysts, for instance, are often on the frontlines, dealing with high-pressure situations like incident response and threat mitigation. The expectation to always be “on” can lead to burnout, especially in a culture that prioritizes output over well-being. Similarly, CISOs face unique challenges as they balance technical, strategic, and political pressures. They’re often caught between managing expectations from the C-suite and addressing operational realities. CISO burnout is very real, driven in part by the immense liability and scrutiny associated with the role. The constant pressure, combined with the growing complexity of threats, leads many CISOs to leave their positions, with some even vowing, “never again will I do this job.” This trend is tragic, as organizations lose experienced leaders who play a critical role in shaping cybersecurity strategies. ... Leaders play a crucial role in fostering a positive culture and must take proactive steps to address toxicity. They should prioritize open communication and actively solicit feedback from their teams on a regular basis. Anonymous surveys, one-on-one meetings, and team discussions can help identify pain points. 


The Cultural Backlash Against Generative AI

Part of the problem is that generative AI really can’t effectively do everything the hype claims. An LLM can’t be reliably used to answer questions, because it’s not a “facts machine”. It’s a “probable next word in a sentence machine”. But we’re seeing promises of all kinds that ignore these limitations, and tech companies are forcing generative AI features into every kind of software you can think of. People hated Microsoft’s Clippy because it wasn’t any good and they didn’t want to have it shoved down their throats — and one might say they’re doing the same basic thing with an improved version, and we can see that some people still understandably resent it. When someone goes to an LLM today and asks for the price of ingredients in a recipe at their local grocery store right now, there’s absolutely no chance that model can answer that correctly, reliably. That is not within its capabilities, because the true data about those prices is not available to the model. The model might accidentally guess that a bag of carrots is $1.99 at Publix, but it’s just that, an accident. In the future, with chaining models together in agentic forms, there’s a chance we could develop a narrow model to do this kind of thing correctly, but right now it’s absolutely bogus. But people are asking LLMs these questions today! And when they get to the store, they’re very disappointed about being lied to by a technology that they thought was a magic answer box.


Developers: The Last Line of Defense Against AI Risks

Considering security early in the software development lifecycle has not traditionally been standard practice among developers. This oversight is, of course, a goldmine for cybercriminals who exploit ML models to inject harmful malware into software. The lack of security training for developers makes the issue worse, particularly when AI-generated code, trained on potentially insecure open-source data, is not adequately screened for vulnerabilities. Once AI/ML models integrate such code, the potential for undetected exploits only increases. Therefore, developers must also function as security champions, and DevOps and security can no longer be considered separate functions. ... As AI continues to be implemented at scale by different teams, advanced security in ML models is key. Enter the “Shift Left” approach, which advocates integrating security measures early in the software lifecycle to prevent as many future vulnerabilities as possible and ensure comprehensive security throughout the development process. This strategy is critical in AI/ML development: code and models, which often come from external sources and sometimes cannot be trusted, must be vetted for security and compliance before they are ever deployed.


How Leaders Can Leverage AI For Data Management And Decision-Making

“The real challenge isn’t just the cost of storing data—it’s making sense of it,” explains Nilo Rahmani, CEO of Thoras.ai. “An estimated 80% of incident resolution time is spent simply identifying the root cause, which is a costly inefficiency that AI can help solve.” AI-powered analytics can detect patterns, predict failures, and automate troubleshooting, reducing downtime and improving reliability. By leveraging AI, companies can streamline their data operations while increasing speed and accuracy in decision-making. Effective data management extends beyond simple storage—it requires real-time intelligence to ensure organizations are using the right data at the right time. AI plays a critical role in distinguishing meaningful data from noise, helping companies focus on insights that drive growth. ... AI is poised to revolutionize data management, but success will depend on how well organizations integrate it into their existing frameworks. Companies that embrace AI-driven automation, predictive analytics, and proactive infrastructure management will not only reduce costs but also gain a competitive edge by making faster, smarter decisions. Leaders must shift their focus from simply collecting and storing data to using it intelligently. 


Ramping Up AI Adoption in Local Government

One of the biggest barriers stopping local authorities from embracing AI is the lack of knowledge and misunderstanding around the technology. For many years the fear of the unknown has caused confusion, with numerous news articles claiming modern technology poses a threat to humanity. This could not be further from the truth. ... One key area missing from the AI Opportunities Action Plan is managing and upskilling workers. People are core to every transformation, even ones that are digitally focused. To truly unlock the power of AI, employees need to be supported and trained in a judgement-free space, allowing them to disclose any concerns or areas of support. After years of fear-mongering, some employees may be hesitant to engage with an AI transformation. Therefore, it’s up to leaders to adopt a top-down approach to promoting and embracing AI in the workplace. To begin, a skills audit should be conducted, assessing existing knowledge of and experience with AI-related skills. Based on this, customised training plans can be developed to ensure everyone within the organisation feels supported and confident. It’s important for leaders to emphasise that a digital transformation doesn’t mean job cuts, but rather takes away time-consuming tasks and allows staff to focus on higher-value, creative and strategic work.

Daily Tech Digest - February 01, 2025


Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Landry


5 reasons the enterprise data center will never die

Cloud repatriation — enterprises pulling applications back from the cloud to the data center — remains a popular option for a variety of reasons. According to a June 2024 IDC survey, about 80% of 2,250 IT decision-maker respondents “expected to see some level of repatriation of compute and storage resources in the next 12 months.” IDC adds that the six-month period between September 2023 and March 2024 saw increased levels of repatriation plans “across both compute and storage resources for AI lifecycle, business apps, infrastructure, and database workloads.” ... According to Forrester’s 2023 Infrastructure Cloud Survey, 79% of roughly 1,300 enterprise cloud decision-makers said their firms are implementing internal private clouds, which will use virtualization and private cloud management. Nearly a third (31%) of respondents said they are building internal private clouds using hybrid cloud management solutions such as software-defined storage and API-consistent hardware to make the private cloud more like the public cloud, Forrester adds. ... “Edge is a crucial technology infrastructure that extends and innovates on the capabilities found in core datacenters, whether enterprise- or service-provider-oriented,” says IDC. The rise of edge computing shatters the binary “cloud-or-not-cloud” way of thinking about data centers and ushers in an “everything everywhere all at once” distributed model.


How to Understand and Manage Cloud Costs with a Data-Driven Strategy

Understanding your cloud spend starts with getting serious about data. If your cloud usage grew organically across teams over time, you're probably staring at a bill that feels more like a puzzle than a clear financial picture. You know you're paying too much, and you have an idea of where the spending is happening across compute, storage, and networking, but you are not sure which teams are overspending, which applications are being overprovisioned, and so on. Multicloud environments add even another layer of complexity to data visibility. ... With a holistic view of your data established, the next step is augmenting tools to gain a deeper understanding of your spending and application performance. To achieve this, consider employing a surgical approach by implementing specialized cost management and performance monitoring tools that target specific areas of your IT infrastructure. For example, granular financial analytics can help you identify and eliminate unnecessary expenses with precision. Real-time visibility tools provide immediate insights into cost anomalies and performance issues, allowing for prompt corrective actions. Governance features ensure that spending aligns with budgetary constraints and compliance requirements, while integration capabilities with existing systems facilitate seamless data consolidation and analysis across different platforms. 
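The "real-time visibility" piece often boils down to anomaly detection over spend data. A toy sketch (the threshold, window, and figures are invented; real tools operate on tagged, per-team billing exports):

```python
def flag_cost_anomalies(daily_spend, window=7, jump=1.5):
    """Flag day indices whose spend exceeds `jump` times the average
    of the preceding `window` days: a crude stand-in for the
    cost-anomaly alerts that visibility tools provide."""
    flagged = []
    for i in range(1, len(daily_spend)):
        baseline = daily_spend[max(0, i - window):i]
        if daily_spend[i] > jump * (sum(baseline) / len(baseline)):
            flagged.append(i)
    return flagged
```

Attributing each flagged day back to a team or application is where the tagging and governance features mentioned above earn their keep.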


Top cybersecurity priorities for CFOs

CFOs need to be aware of the rising threats of cyber extortion, says Charles Soranno, a managing director at global consulting firm Protiviti. “Cyber extortion is a form of cybercrime where attackers compromise an organization’s systems, data or networks and demand a ransom to return to normal and prevent further damage,” he says. Beyond a ransomware attack, where data is encrypted and held hostage until the ransom is paid, cyber extortion can involve other evolving threats and tactics, Soranno says. “CFOs are increasingly concerned about how these cyber extortion schemes impact lost revenue, regulatory fines [and] potential payments to bad actors,” he says. ... “In collaboration with other organizational leaders, CFOs must assess the risks posed by these external partners to identify vulnerabilities and implement a proactive mitigation and response plan to safeguard from potential threats and issues.” While a deep knowledge of the entire supply chain’s cybersecurity posture might seem like a luxury for some organizations, the increasing interconnectedness of partner relationships is making third-party cybersecurity risk profiles more of a necessity, Krull says. “The reliance on third-party vendors and cloud services has grown exponentially, increasing the potential for supply chain attacks,” says Dan Lohrmann, field CISO at digital services provider Presidio. 


GDPR authorities accused of ‘inactivity’

The idea that the GDPR has brought about a shift towards a serious approach to data protection has largely proven to be wishful thinking, according to a statement from noyb. “European data protection authorities have all the necessary means to adequately sanction GDPR violations and issue fines that would prevent similar violations in the future,” Schrems says. “Instead, they frequently drag out the negotiations for years — only to decide against the complainant’s interests all too often.” ... “Somehow it’s only data protection authorities that can’t be motivated to actually enforce the law they’re entrusted with,” criticizes Schrems. “In every other area, breaches of the law regularly result in monetary fines and sanctions.” Data protection authorities often act in the interests of companies rather than the data subjects, the activist suspects. It is precisely fines that motivate companies to comply with the law, reports the association, citing its own survey. Two-thirds of respondents stated that decisions by the data protection authority that affect their own company and involve a fine lead to greater compliance. Six out of ten respondents also admitted that even fines imposed on other organizations have an impact on their own company. 


The three tech tools that will take the heat off HR teams in 2025

As for the employee review process, a content services platform enables HR employees to customise processes, routing approvals to the right managers, department heads, and people ops. This means that employee review processes can be expedited thanks to customisable forms, with easier goal setting, identification of upskilling opportunities, and career progression. When paperwork and contracts are uniform, customisable, and easily located, employers are equipped to support their talent to progress as quickly as possible – nurturing more fulfilled employees who want to stick around. ... Naturally, a lot of HR work is form-heavy, with anything from employee onboarding and promotions to progress reviews and remote working requests requiring HR input. However, with a content services platform, HR professionals can route and approve forms quickly, speeding up the process with digital forms that allow employees to enter information quickly and accurately. Going one step further, HR leaders can leverage automated workflows to route forms to approvers as soon as an employee completes them – cutting out the HR intermediary. ... Armed with a single source of truth, HR professionals can take advantage of automated workflows, enabling efficient notifications and streamlining HR compliance processes.


AI Could Turn Against You — Unless You Fix Your Data Trust Issues

Without unified standards for data formats, definitions, and validations, organizations struggle to establish centralized control. Legacy systems, often ill-equipped to handle modern data volumes, further exacerbate the problem. These systems were designed for periodic updates rather than the continuous, real-time streams demanded by AI, leading to inefficiencies and scalability limitations. To address these challenges, organizations must implement centralized governance, quality, and observability within a single framework. This enables them to leverage data lineage and track their data as it moves through systems to ensure transparency and identify issues in real time. It also lets them regularly validate data integrity through real-time quality checks, supporting consistent, reliable AI models. ... For organizations to maximize the potential of AI, they must embed data trust into their daily operations. This involves using automated systems like data observability to validate data integrity throughout its lifecycle, integrated governance to maintain reliability, and continuous validation within evolving data ecosystems. By addressing data quality challenges and investing in unified platforms, organizations can transform data trust into a strategic advantage.
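The real-time quality checks described above can be sketched as a simple validation gate that runs before rows reach a model. The field names, rules, and sample data are invented for illustration; production observability platforms apply the same idea at scale:

```python
# Minimal data-quality gate: validate rows before they feed an AI pipeline.
# Field names and thresholds are illustrative, not from any specific platform.

RULES = {
    "customer_id": lambda v: isinstance(v, str) and v != "",
    "age": lambda v: isinstance(v, int) and 0 <= v <= 120,
    "email": lambda v: isinstance(v, str) and "@" in v,
}

def validate(rows):
    """Return (clean_rows, issues); each issue records row index and field."""
    clean, issues = [], []
    for i, row in enumerate(rows):
        bad = [f for f, check in RULES.items() if not check(row.get(f))]
        if bad:
            issues.extend((i, f) for f in bad)
        else:
            clean.append(row)
    return clean, issues

rows = [
    {"customer_id": "c1", "age": 34, "email": "a@x.com"},
    {"customer_id": "", "age": 250, "email": "nope"},  # fails all three rules
]
clean, issues = validate(rows)
```

Recording which row and field failed, rather than silently dropping rows, is what makes the check observable: the issues list can feed dashboards and alerts.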


Backdoor in Chinese-made healthcare monitoring device leaks patient data

“By reviewing the firmware code, the team determined that the functionality is very unlikely to be an alternative update mechanism, exhibiting highly unusual characteristics that do not support the implementation of a traditional update feature,” CISA said in its analysis report. “For example, the function provides neither an integrity checking mechanism nor version tracking of updates. When the function is executed, files on the device are forcibly overwritten, preventing the end customer — such as a hospital — from maintaining awareness of what software is running on the device.” In addition to this hidden remote code execution behavior, CISA also found that once the CMS8000 completes its startup routine, it also connects to that same IP address over port 515, which is normally associated with the Line Printer Daemon (LPD), and starts transmitting patient information without the device owner’s knowledge. “The research team created a simulated network, created a fake patient profile, and connected a blood pressure cuff, SpO2 monitor, and ECG monitor peripherals to the patient monitor,” the agency said. “Upon startup, the patient monitor successfully connected to the simulated IP address and immediately began streaming patient data to the address.”


3 Considerations for Mutual TLS (mTLS) in Cloud Security

Traditional security approaches often rely on IP whitelisting as a primary method of access control. While this technique can provide a basic level of security, IP whitelists operate on a fundamentally flawed assumption: that IP addresses alone can accurately represent trusted entities. In reality, this approach fails to effectively model real-world attack scenarios. IP whitelisting provides no mechanism for verifying the integrity or authenticity of the connecting service. It merely grants access based on network location, ignoring crucial aspects of identity and behavior. In contrast, mTLS addresses these shortcomings by focusing on cryptographic identity rather than network location. ... In the realm of mTLS, identity is paramount. It's not just about encrypting data in transit; it's about ensuring that both parties in a communication are exactly who they claim to be. This concept of identity in mTLS warrants careful consideration. In a traditional network, identity might be tied to an IP address or a shared secret. But, in the modern world of cloud-native applications, these concepts fall short. mTLS shifts the mindset by basing identity on cryptographic certificates. Each service possesses its own unique certificate, which serves as its identity card.
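The certificate-based identity model above can be sketched with Python's standard-library ssl module. The file paths are placeholders, and loading the actual certificate chain is commented out; in a real deployment each service would present a cert/key pair issued by a private CA:

```python
import ssl

def server_context(ca_file=None):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # The crucial mTLS switch: the server *requires* a client certificate,
    # so identity comes from cryptography, not from the caller's IP address.
    ctx.verify_mode = ssl.CERT_REQUIRED
    if ca_file:
        ctx.load_verify_locations(ca_file)  # trust only the private CA
        # ctx.load_cert_chain("server.pem", "server.key")  # server identity
    return ctx

def client_context(ca_file=None):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = True  # verify the server's certificate identity too
    if ca_file:
        ctx.load_verify_locations(ca_file)
        # ctx.load_cert_chain("client.pem", "client.key")  # client identity
    return ctx

srv = server_context()
cli = client_context()
```

The symmetry is the point: both sides verify the other's certificate, which is what distinguishes mutual TLS from ordinary server-only TLS.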


Artificial Intelligence Versus the Data Engineer

It’s worth noting that there is a misconception that AI can prepare data for AI, when the reality is that, while AI can accelerate the process, data engineers are still needed to get that data in shape before it reaches the AI processes and models and we see the cool end results. At the same time, there are AI tools that can certainly accelerate and scale the data engineering work. So AI is both causing and solving the challenge in some respects! So, how does AI change the role of the data engineer? Firstly, the role of the data engineer has always been tricky to define. We sit atop a large pile of technology, most of which we didn’t choose or build, and an even larger pile of data we didn’t create, and we have to make sense of the world. Ostensibly, we are trying to get to something scientific. ... That art comes in the form of the intuition required to sift through the data, understand the technology, and rediscover all the little real-world nuances and history that over time have turned some lovely clean data into a messy representation of the real world. The real skill great data engineers have is therefore not the SQL ability but how they apply it to the data in front of them to sniff out the anomalies, the quality issues, the missing bits and those historical mishaps that must be navigated to get to some semblance of accuracy.
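The anomaly-sniffing skill described above can be illustrated with a tiny profiling pass over a table: null rates and out-of-range values are often the first signs of those historical mishaps. The column names, data, and thresholds below are invented:

```python
# Illustrative data-profiling pass: surface null rates and numeric outliers
# in a table before trusting it downstream.

rows = [
    {"order_id": 1, "amount": 42.0,   "country": "DE"},
    {"order_id": 2, "amount": 39.5,   "country": "DE"},
    {"order_id": 3, "amount": None,   "country": "DE"},  # missing value
    {"order_id": 4, "amount": -950.0, "country": "XX"},  # historical mishap
]

def null_rate(rows, col):
    """Fraction of rows where the column is missing."""
    return sum(1 for r in rows if r[col] is None) / len(rows)

def out_of_range(rows, col, lo, hi):
    """Rows whose value falls outside the plausible [lo, hi] band."""
    return [r for r in rows if r[col] is not None and not (lo <= r[col] <= hi)]

print(f"amount null rate: {null_rate(rows, 'amount'):.0%}")
suspects = out_of_range(rows, "amount", 0, 10_000)
```

The checks themselves are trivial; the engineering judgment lies in knowing which bands are plausible for this business and which anomalies are real history rather than errors.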


How engineering teams can thrive in 2025

Adopting a "fail forward" mentality is crucial as teams experiment with AI and other emerging technologies. Engineering teams are embracing controlled experimentation and rapid iteration, learning from failures and building knowledge. ... Top engineering teams will combine emerging technologies with new ways of working. They’re not just adopting AI—they’re rethinking how software is developed and maintained as a result of it. Teams will need to stay agile to lead the way. Collaboration within the business and access to a multidisciplinary talent base is the recipe for success. Engineering teams should proactively scenario plan to manage uncertainty by adopting agile frameworks like the "5Ws" (Who, What, When, Where, and Why). This approach allows organizations to tailor tech adoption strategies and marry regulatory compliance with innovation. Engineering teams should also actively address AI bias and ensure fair and responsible AI deployment. Many enterprises are hiring responsible AI specialists and ethicists as regulatory standards are now in force, including the EU AI Act, which impacts organizations with users in the European Union. As AI improves, the expertise and technical skills that proved valuable before need to be continually reevaluated. Organizations that successfully adopt AI and emerging tech will thrive.


Daily Tech Digest - January 31, 2025


Quote for the day:

“If you genuinely want something, don’t wait for it–teach yourself to be impatient.” -- Gurbaksh Chahal


GenAI fueling employee impersonation with biometric spoofs and counterfeit ID fraud

The annual AuthenticID report underlines the surging wave of AI-powered identity fraud, with rising biometric spoofs and counterfeit ID fraud attempts. The 2025 State of Identity Fraud Report also looks at how identity verification tactics and technology innovations are tackling the problem. “In 2024, we saw just how sophisticated fraud has now become: from deepfakes to sophisticated counterfeit IDs, generative AI has changed the identity fraud game,” said Blair Cohen, AuthenticID founder and president. ... “In 2025, businesses should embrace the mentality to ‘think like a hacker’ to combat new cyber threats,” said Chris Borkenhagen, chief digital officer and information security officer at AuthenticID. “Staying ahead of evolving strategies such as AI deepfake-generated documents and biometrics, emerging technologies, and bad actor account takeover tactics are crucial in protecting your business, safeguarding data, and building trust with customers.” ... Face biometric verification company iProov has identified the Philippines as a particular hotspot for digital identity fraud, with corresponding need for financial institutions and consumers to be vigilant. “There is a massive increase at the moment in terms of identity fraud against systems using generative AI in particular and deepfakes,” said iProov chief technology officer Dominic Forrest.


Cyber experts urge proactive data protection strategies

"Every organisation must take proactive measures to protect the critical data it holds," Montel stated. Emphasising foundational security practices, he advised organisations to identify their most valuable information and protect potential attack paths. He noted that simple steps can contribute drastically to overall security. On the consumer front, Montel highlighted the pervasive nature of data collection, reminding individuals of the importance of being discerning about the personal information they share online. "Think before you click," he advised, underscoring the potential of openly shared public information to be exploited by cybercriminals. Adding to the discussion on data resilience, Darren Thomson, Field CTO at Commvault, emphasised the changing landscape of cyber defence and recovery strategies needed by organisations. Thomson pointed out that mere defensive measures are not sufficient; rapid recovery processes are crucial to maintain business resilience in the event of a cyberattack. The concept of a "minimum viable company" is pivotal, where businesses ensure continuity of essential operations even when under attack. With cybercriminal tactics becoming increasingly sophisticated, organisations can no longer rely solely on traditional backups.


Trump Administration Faces Security Balancing Act in Borderless Cyber Landscape

The borderless nature of cyber threats and AI, the scale of worldwide commerce, and the globally interconnected digital ecosystem pose significant challenges that transcend partisanship. As recent experience makes us all too aware, an attack originating in one country, state, sector, or company can spread almost instantaneously, and with devastating impact. Consequently, whatever the ideological preferences of the Administration, from a pragmatic perspective cybersecurity must be a collaborative national (and international) activity, supported by regulations where appropriate. It’s an approach taken in the European Union, whose member states are now subject to the second Network and Information Security Directive (NIS2)—focused on critical national infrastructure and other important sectors—and the financial sector-focused Digital Operational Resilience Act (DORA). Both regulations seek to create a rising tide of cyber resilience that lifts all ships, and a core element of both is a focus on reporting and threat intelligence sharing. In-scope organizations are required to implement robust measures to detect cyber attacks, report breaches in a timely way, and, wherever possible, share the information they accumulate on threats, attack vectors, and techniques with the EU’s central cybersecurity agency (ENISA).


Infrastructure as Code: From Imperative to Declarative and Back Again

Today, tools like CDK for Terraform (CDKTF) and Pulumi have become popular choices among engineers. These tools allow developers to write IaC using familiar programming languages like Python, TypeScript, or Go. At first glance, this is a return to imperative IaC. However, under the hood, they still generate declarative configurations — such as Terraform plans or CloudFormation templates — that define the desired state of the infrastructure. Why the resurgence of imperative-style interfaces? The answer lies in a broader trend toward improving developer experience (DX), enabling self-service, and enhancing accessibility. Much like the shifts we’re seeing in fields such as platform engineering, these tools are designed to streamline workflows and empower developers to work more effectively. ... The current landscape represents a blending of philosophies. While IaC tools remain fundamentally declarative in managing state and resources, they increasingly incorporate imperative-like interfaces to enhance usability. The move toward imperative-style interfaces isn’t a step backward. Instead, it highlights a broader movement to prioritize developer accessibility and productivity, aligning with the emphasis on streamlined workflows and self-service capabilities.
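The "imperative code, declarative output" pattern can be shown with a toy synthesizer: ordinary Python control flow produces a desired-state document, much as CDKTF or Pulumi programs do. The resource names and output shape here are invented, not any real tool's schema:

```python
import json

def synthesize(environments):
    """Imperative loop that emits a declarative desired-state document."""
    resources = {}
    for env in environments:                  # imperative control flow...
        resources[f"{env}-bucket"] = {
            "type": "storage_bucket",
            "name": f"app-logs-{env}",
            "versioning": env == "prod",      # per-env logic is easy to express
        }
    return {"resources": resources}           # ...declarative end state

plan = synthesize(["dev", "staging", "prod"])
print(json.dumps(plan, indent=2))
```

The engine that later applies the plan only ever sees the declarative document, which is why state management and drift detection still work the same way as with hand-written configuration.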


How to Train AI Dragons to Solve Network Security Problems

We all know AI’s mantra: More data, faster processing, large models and you’re off to the races. But what if a problem is so specific — like network or DDoS security — that it doesn’t have a lot of publicly or privately available data you can use to solve it? As with other AI applications, the quality of the data you feed an AI-based DDoS defense system determines the accuracy and effectiveness of its solutions. To train your AI dragon to defend against DDoS attacks, you need detailed, real-world DDoS traffic data. Since this data is not widely and publicly available, your best option is to work with experts who have access to this data or, even better, have analyzed and used it to train their own AI dragons. To ensure effective DDoS detection, look at real-world, network-specific data and global trends as they apply to the network you want to protect. This global perspective adds valuable context that makes it easier to detect emerging or worldwide threats. ... Predictive AI models shine when it comes to detecting DDoS patterns in real-time. By using machine learning techniques such as time-series analysis, classification and regression, they can recognize patterns of attacks that might be invisible to human analysts. 
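The time-series angle can be illustrated with the simplest possible detector: a z-score test of the current request rate against recent history. Real DDoS defense models are far richer, and the traffic figures below are synthetic:

```python
from statistics import mean, stdev

# Baseline request rates (req/s) observed on the protected network.
baseline = [1020, 980, 1005, 995, 1010, 990, 1000, 1015]

def is_anomalous(observed, history, threshold=3.0):
    """Flag a reading whose z-score against recent history exceeds threshold."""
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) / sigma > threshold

normal_reading = 1008   # within the usual band
attack_reading = 9500   # sudden flood, orders of magnitude above baseline
```

A detector like this only works as well as its baseline, which is exactly the article's point: without representative, network-specific traffic data, even sophisticated models have nothing meaningful to compare against.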


How law enforcement agents gain access to encrypted devices

When a mobile device is seized, law enforcement can request the PIN, password, or biometric data from the suspect to access the phone if they believe it contains evidence relevant to an investigation. In England and Wales, if the suspect refuses, the police can serve a notice requiring compliance, and a further refusal is itself a criminal offence under the Regulation of Investigatory Powers Act (RIPA). “If access is not gained, law enforcement use forensic tools and software to unlock, decrypt, and extract critical digital evidence from a mobile phone or computer,” says James Farrell, an associate at cyber security consultancy CyXcel. “However, there are challenges on newer devices and success can depend on the version of operating system being used.” ... Law enforcement agencies have pressured companies to create “lawful access” solutions, particularly on smartphones; take Apple as an example. “You also have the co-operation of cloud companies, which if backups are held can sidestep the need to break the encryption of a device all together,” Closed Door Security’s Agnew explains. The security community has long argued against law enforcement backdoors, not least because they create security weaknesses that criminal hackers might exploit. “Despite protests from law enforcement and national security organizations, creating a skeleton key to access encrypted data is never a sensible solution,” CreateFuture’s Watkins argues.


The quantum computing reality check

Major cloud providers have made quantum computing accessible through their platforms, which creates an illusion of readiness for enterprise adoption. However, this accessibility masks a fatal flaw: Most quantum computing applications remain experimental. Indeed, most require deep expertise in quantum physics and specialized programming knowledge. Real-world applications are severely limited, and the costs are astronomical compared to the actual value delivered. ... The timeline to practical quantum computing applications is another sobering reality. Industry experts suggest we’re still 7 to 15 years away from quantum systems capable of handling production workloads. This extended horizon makes it difficult to justify significant investments. Until then, more immediate returns could be realized through existing technologies. ... The industry’s fascination with quantum computing has made companies fear being left behind or, worse, not being part of the “cool kids club”; they want to deliver extraordinary presentations to investors and customers. We tend to jump into new trends too fast because the allure of being part of something exciting and new is just too compelling. I’ve fallen into this trap myself. ... Organizations must balance their excitement for quantum computing with practical considerations about immediate business value and return on investment. I’m optimistic about the potential value in QaaS. 


Digital transformation in banking: Redefining the role of IT-BPM services

IT-BPM services are the engine of digital transformation in banking. They streamline operations through automation technologies like RPA, enhancing efficiency in processes such as customer onboarding and loan approvals. This automation reduces errors and frees up staff for strategic tasks like personalised customer support. By harnessing big data analytics, IT-BPM empowers banks to personalise services, detect fraud, and make informed decisions, ultimately improving both profitability and customer satisfaction. Robust security measures and compliance monitoring are also integral, ensuring the protection of sensitive customer data in the increasingly complex digital landscape. ... IT-BPM services are crucial for creating seamless, multi-channel customer experiences. They enable the development of intuitive platforms, including AI-driven chatbots and mobile apps, providing instant support and convenient financial management. This focus extends to personalised services tailored to individual customer needs and preferences, and a truly integrated omnichannel experience across all banking platforms. Furthermore, IT-BPM fosters agility and innovation by enabling rapid development of new digital products and services and facilitating collaboration with fintech companies.


Revolutionizing data management: Trends driving security, scalability, and governance in 2025

Artificial Intelligence and Machine Learning transform traditional data management paradigms by automating labour-intensive processes and enabling smarter decision-making. In the upcoming years, augmented data management solutions will drive efficiency and accuracy across multiple domains, from data cataloguing to anomaly detection. AI-driven platforms process vast datasets to identify patterns, automating tasks like metadata tagging, schema creation and data lineage mapping. ... In 2025, data masking will not be merely a compliance tool for GDPR, HIPAA, or CCPA; it will be a strategic enabler. With the rise in hybrid and multi-cloud environments, businesses will increasingly need to secure sensitive data across diverse systems. Vendors like IBM, K2view, Oracle, and Informatica will revolutionize data masking by offering scale-based, real-time, context-aware masking. ... Real-time integration enhances customer experiences through dynamic pricing, instant fraud detection, and personalized recommendations. These capabilities rely on distributed architectures designed to handle diverse data streams efficiently. The focus on real-time integration extends beyond operational improvements.
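Context-aware masking means preserving just enough structure for downstream use while hiding the sensitive part. As a minimal sketch (the field names and masking rules are invented, not any vendor's implementation):

```python
import re

def mask_email(v: str) -> str:
    """Keep the first character and the domain so the value stays testable."""
    local, _, domain = v.partition("@")
    return local[0] + "***@" + domain

def mask_card(v: str) -> str:
    """Keep only the last four digits, a common display convention."""
    digits = re.sub(r"\D", "", v)
    return "**** **** **** " + digits[-4:]

record = {"email": "alice@example.com", "card": "4111-1111-1111-1234"}
masked = {
    "email": mask_email(record["email"]),
    "card": mask_card(record["card"]),
}
```

The "context-aware" part in commercial tools goes much further, choosing masking rules per field type, environment, and user role, but the trade-off is the same: masked data must remain useful to the systems that consume it.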


Deploying AI at the edge: The security trade-offs and how to manage them

The moment you bring compute nodes into the far edge, you’re automatically exposing a lot of security challenges in your network. Even if you expect them to be “disconnected devices,” they could intermittently connect to transmit data. So, your security footprint is expanded. You must ensure that every piece of the stack you’re deploying at the edge is secure and trustworthy, including the edge device itself. When considering security for edge AI, you have to think about transmitting the trained model, runtime engine, and application from a central location to the edge, opening up the opportunity for a person-in-the-middle attack. ... In military operations, continuous data streams from millions of global sensors generate an overwhelming volume of information. Cloud-based solutions are often inadequate due to storage limitations, processing capacity constraints, and unacceptable latency. Therefore, edge computing is crucial for military applications, enabling immediate responses and real-time decision-making. In commercial settings, many environments lack reliable or affordable connectivity. Edge AI addresses this by enabling local data processing, minimizing the need for constant communication with the cloud. This localized approach enhances security. Instead of transmitting large volumes of raw data, only essential information is sent to the cloud. 
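One concrete mitigation for the person-in-the-middle risk when shipping a model to the edge is integrity checking: the edge node recomputes a digest of the artifact and compares it against the digest published by the central site. A minimal sketch with stdlib hashing (in practice a signed manifest would be verified, and the model bytes below are a stand-in for a real weights file):

```python
import hashlib
import hmac

def sha256_of(blob: bytes) -> str:
    """Digest of the model artifact as received at the edge."""
    return hashlib.sha256(blob).hexdigest()

def verify_artifact(blob: bytes, expected: str) -> bool:
    """Reject the artifact if it was altered in transit.

    hmac.compare_digest gives a constant-time comparison.
    """
    return hmac.compare_digest(sha256_of(blob), expected)

model_bytes = b"weights-v1"            # stand-in for the model file
published = sha256_of(model_bytes)     # digest obtained out-of-band
tampered = b"weights-v1-evil"          # what an attacker might substitute
```

The digest must travel over a channel the attacker cannot tamper with (or be signed), otherwise the check verifies nothing; that is why real deployments pair hashing with code-signing infrastructure.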


Daily Tech Digest - January 30, 2025


Quote for the day:

"Uncertainty is not an indication of poor leadership; it underscores the need for leadership." -- Andy Stanley


Doing authentication right

Like encryption, authentication is one of those things that you are tempted to “roll your own” but absolutely should not. The industry has progressed enough that you should definitely “buy and not build” your authentication solution. Plenty of vendors offer easy-to-implement solutions and stay diligently on top of the latest security issues. Authentication also becomes a tradeoff between security and a good user experience. ... Passkeys are a relatively new technology and there is a lot of FUD floating around out there about them. The bottom line is that they are safe, secure, and easy for your users. They should be your primary way of authenticating. Several vendors make implementing passkeys not much harder than inserting a web component in your application. ... Forcing users to use hard-to-remember passwords means they will be more likely to write them down or use a simple password that meets the requirements. Again, it may seem counterintuitive, but XKCD has it right. In addition, the longer the password, the harder it is to crack. Let your users create long, easy-to-remember passwords rather than force them to use shorter, difficult-to-remember passwords. ... Six digits is the outer limit for OTPs, and you should consider shorter ones. Under no circumstances should you require OTPs longer than six digits because they are vastly harder for users to keep in short-term memory.
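Two of the points above, six-digit OTPs and long-but-memorable passphrases, can be sketched with the stdlib secrets module. The word list here is a tiny stand-in; a real deployment would use a diceware-style list of thousands of words (and, per the article, buy rather than build the surrounding authentication flow):

```python
import secrets

def six_digit_otp() -> str:
    """A uniformly random six-digit code, zero-padded."""
    return f"{secrets.randbelow(10**6):06d}"

# Tiny illustrative word list; real passphrase generators use thousands.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "maple"]

def passphrase(n_words: int = 4) -> str:
    """Long and memorable beats short and cryptic (the XKCD point)."""
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

print(six_digit_otp())
print(passphrase())
```

Using secrets rather than random matters: the random module is not cryptographically secure and must never generate authentication codes.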


Augmenting Software Architects with Artificial Intelligence

Technical debt is mistakenly thought of as just a source code problem, but the concept is also applicable to source data (this is referred to as data debt) as well as your validation assets. AI has been used for years to analyze existing systems to identify potential opportunities to improve the quality (to pay down technical debt). SonarQube, CAST SQG and BlackDuck’s Coverity Static Analysis statically analyze existing code. Applitools Visual AI dynamically finds user interface (UI) bugs, and Veracode’s DAST finds runtime vulnerabilities in web apps. The advantages of this use case are that it pinpoints aspects of your implementation that potentially should be improved. As described earlier, AI tooling offers the potential for greater range, thoroughness, and trustworthiness of the work products as compared with that of people. Drawbacks to using AI-tooling to identify technical debt include the accuracy, IP, and privacy risks described above. ... As software architects we regularly work with legacy implementations that we need to leverage and often evolve. This software is often complex, using a myriad of technologies for reasons that have been forgotten over time. Tools such as CAST Imaging visualize existing code, and ChartDB visualizes legacy data schemas to provide a “birds-eye view” of the actual situation that you face.


Keep Your Network Safe From the Double Trouble of a ‘Compound Physical-Cyber Threat'

Your first step should be to evaluate the state of your company’s cyber defenses, including communications and IT infrastructure, and the cybersecurity measures you already have in place—identifying any vulnerabilities and gaps. One vulnerability to watch for is a dependence on multiple security platforms, patches, policies, hardware, and software, where a lack of tight integration can create gaps that hackers can readily exploit. Consider using operational resilience assessment software as part of the exercise, and if you lack the internal know-how or resources to manage the assessment, consider enlisting a third-party operational resilience risk consultant. ... Aging network communications hardware and software, including on-premises systems and equipment, are top targets for hackers during a disaster because they often include a single point of failure that’s readily exploitable. The best counter in many cases is to move the network and other key communications infrastructure (a contact center, for example) to the cloud. Not only do cloud-based networks such as SD-WAN, (software-defined wide area network) have the resilience and flexibility to preserve connectivity during a disaster, they also tend to come with built-in cybersecurity measures.


California’s AG Tells AI Companies Practically Everything They’re Doing Might Be Illegal

“The AGO encourages the responsible use of AI in ways that are safe, ethical, and consistent with human dignity,” the advisory says. “For AI systems to achieve their positive potential without doing harm, they must be developed and used ethically and legally,” it continues, before dovetailing into the many ways in which AI companies could, potentially, be breaking the law. ... There has been quite a lot of, shall we say, hyperbole, when it comes to the AI industry and what it claims it can accomplish versus what it can actually accomplish. Bonta’s office says that, to steer clear of California’s false advertising law, companies should refrain from “claiming that an AI system has a capability that it does not; representing that a system is completely powered by AI when humans are responsible for performing some of its functions; representing that humans are responsible for performing some of a system’s functions when AI is responsible instead; or claiming without basis that a system is accurate, performs tasks better than a human would, has specified characteristics, meets industry or other standards, or is free from bias.” ... Bonta’s memo clearly illustrates what a legal clusterfuck the AI industry represents, though it doesn’t even get around to mentioning U.S. copyright law, which is another legal gray area where AI companies are perpetually running into trouble.


Knowledge graphs: the missing link in enterprise AI

Knowledge graphs are a layer of connective tissue that sits on top of raw data stores, turning information into contextually meaningful knowledge. So in theory, they’d be a great way to help LLMs understand the meaning of corporate data sets, making it easier and more efficient for companies to find relevant data to embed into queries, and making the LLMs themselves faster and more accurate. ... Knowledge graphs reduce hallucinations, he says, but they also help solve the explainability challenge. Knowledge graphs sit on top of traditional databases, providing a layer of connection and deeper understanding, says Anant Adya, EVP at Infosys. “You can do better contextual search,” he says. “And it helps you drive better insights.” Infosys is now running proof of concepts to use knowledge graphs to combine the knowledge the company has gathered over many years with gen AI tools. ... When a knowledge graph is used as part of the RAG infrastructure, explicit connections can be used to quickly zero in on the most relevant information. “It becomes very efficient,” said Duvvuri. And companies are taking advantage of this, he says. “The hard question is how many of those solutions are seen in production, which is quite rare. But that’s true of a lot of gen AI applications.”
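The retrieval idea can be shown with a toy graph of subject-predicate-object triples: expanding the neighborhood of an entity yields explicit, explainable facts to embed in an LLM prompt. The triples and entities below are invented for illustration:

```python
# Toy knowledge graph as (subject, predicate, object) triples.
TRIPLES = [
    ("Acme Corp", "acquired", "Widget Inc"),
    ("Widget Inc", "headquartered_in", "Berlin"),
    ("Acme Corp", "ceo", "J. Doe"),
]

def neighborhood(entity, triples, hops=1):
    """Collect facts within `hops` edges of an entity, breadth-first."""
    frontier, facts = {entity}, set()
    for _ in range(hops):
        nxt = set()
        for s, p, o in triples:
            if s in frontier or o in frontier:
                facts.add((s, p, o))
                nxt.update((s, o))
        frontier |= nxt
    return sorted(facts)

context = neighborhood("Acme Corp", TRIPLES)
# Facts render directly into prompt text, and each one is traceable.
prompt_facts = "\n".join(f"{s} {p} {o}" for s, p, o in context)
```

Because every retrieved fact is an explicit edge, the answer can be traced back to its sources, which is the explainability benefit the article describes; a vector search over raw text offers no comparable audit trail.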


U.S. Copyright Office says AI generated content can be copyrighted — if a human contributes to or edits it

The Copyright Office determined that prompts are generally instructions or ideas rather than expressive contributions, which are required for copyright protection. Thus, an image generated with a text-to-image AI service such as Midjourney or OpenAI’s DALL-E 3 (via ChatGPT), on its own could not qualify for copyright protection. However, if the image was used in conjunction with a human-authored or human-edited article (such as this one), then it would seem to qualify. Similarly, for those looking to use AI video generation tools such as Runway, Pika, Luma, Hailuo, Kling, OpenAI Sora, Google Veo 2 or others, simply generating a video clip based on a description would not qualify for copyright. Yet, a human editing together multiple AI generated video clips into a new whole would seem to qualify. The report also clarifies that using AI in the creative process does not disqualify a work from copyright protection. If an AI tool assists an artist, writer or musician in refining their work, the human-created elements remain eligible for copyright. This aligns with historical precedents, where copyright law has adapted to new technologies such as photography, film and digital media. ... While some had called for additional protections for AI-generated content, the report states that existing copyright law is sufficient to handle these issues.


From connectivity to capability: The next phase of private 5G evolution

Faster connectivity is just one benefit of private 5G networks; they are becoming the foundation of the current digital era. These networks outperform conventional public 5G, giving businesses unmatched control, security, and flexibility. For instance, private 5G is essential to seamlessly connecting billions of devices, ensuring ultra-low latency and high reliability in the worldwide IoT industry, which Markets and Markets projects could reach $650.5 billion by 2026. Take digital twins, for example: virtual replicas of physical environments such as factories or entire cities. These replicas require real-time data streaming and ultra-reliable bandwidth to function effectively. Private 5G enables this by delivering consistent performance, turning theoretical models into practical tools that improve operational efficiency and decision-making. ... Private 5G is also driving big improvements in sectors that rely on efficiency and precision. In the logistics sector, for instance, it connects fleets, warehouses, and ports with fast, low-latency networks, streamlining operations throughout the supply chain. In fleet management, private 5G allows real-time tracking of vehicles, improving route planning and fuel use.


American CISOs should prepare now for the coming connected-vehicle tech bans

The rule BIS released is intricate and relies on many pre-existing definitions and policies used by the Commerce Department for different commercial and industrial matters. In general, however, the restrictions and compliance obligations under the rule affect the entire US automotive industry, including all new on-road vehicles sold in the United States (except commercial vehicles such as heavy trucks, for which rules will be determined later). All companies in the automotive industry, including importers and manufacturers of CVs, equipment manufacturers, and component suppliers, will be affected. BIS said it may grant limited specific authorizations to allow mid-generation CV manufacturers to participate in the rule’s implementation period, provided that the manufacturers can demonstrate they are moving into compliance with the next generation. ... Connected vehicles and related component suppliers are required to scrutinize the origins of vehicle connectivity systems (VCS) hardware and automated driving systems (ADS) software to ensure compliance. Suppliers must exclude components with links to the PRC or Russia, which has significant implications for sourcing practices and operational processes.


What to know about DeepSeek AI, from cost claims to data privacy

"Users need to be aware that any data shared with the platform could be subject to government access under China's cybersecurity laws, which mandate that companies provide access to data upon request by authorities," Adrianus Warmenhoven, a member of NordVPN's security advisory board, told ZDNET via email. According to some observers, the fact that R1 is open-source means increased transparency, giving users the opportunity to inspect the model's source code for signs of privacy-related activity. Regardless, DeepSeek also released smaller versions of R1, which can be downloaded and run locally to avoid any concerns about data being sent back to the company (as opposed to accessing the chatbot online). ... "DeepSeek's new AI model likely does use less energy to train and run than larger competitors' models," confirms Peter Slattery, a researcher on MIT's FutureTech team who led its Risk Repository project. "However, I doubt this marks the start of a long-term trend in lower energy consumption. AI's power stems from data, algorithms, and compute -- which rely on ever-improving chips. When developers have previously found ways to be more efficient, they have typically reinvested those gains into making even bigger, more powerful models, rather than reducing overall energy usage."


The AI Imperative: How CIOs Can Lead the Charge

For CIOs, AGI will take this to the next level. Imagine systems that don't just fix themselves but also strategize, optimize and innovate. AGI could automate 90% of IT operations, freeing up teams to focus on strategic initiatives. It could revolutionize cybersecurity by anticipating and neutralizing threats before they strike. It could transform data into actionable insights, driving smarter decisions across the organization. The key is to begin incrementally, prove the value and scale strategically. AGI isn't just a tool; it's a game-changer. ... Cybersecurity risks are real and imminent. Picture this: you're using an open-source AI model and suddenly your system gets hacked. It turns out a malicious contributor slipped in some rogue code. Sounds like a nightmare, right? Open-source AI is powerful, but it has its fair share of risks. Vulnerabilities in the code, supply chain attacks, and a lack of appropriate vendor support are absolutely real concerns. But this is true for any new technology. With the right safeguards, we can minimize and mitigate these risks. Here's what I recommend: regularly review and update open-source libraries; encourage teams to use tools like software composition analysis to detect suspicious changes; and train your team to manage and secure open-source AI deployments.
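One concrete safeguard behind the software-composition-analysis recommendation is pinning each dependency artifact to a known-good digest, so a tampered copy fails verification before it is deployed. The sketch below illustrates the idea with Python's standard `hashlib`; the artifact name and byte contents are made up for the example.

```python
# Minimal sketch of one supply-chain integrity control: pin a dependency
# artifact to a SHA-256 digest recorded at review time, and reject any
# copy whose bytes no longer match. Artifact names/contents are invented.
import hashlib

# "Lockfile": artifact name -> approved SHA-256 digest.
APPROVED_DIGESTS = {
    "model-utils-1.0.tar.gz": hashlib.sha256(b"trusted release bytes").hexdigest(),
}

def verify_artifact(name: str, data: bytes) -> bool:
    """Return True only if the artifact's digest matches the approved one."""
    expected = APPROVED_DIGESTS.get(name)
    return expected is not None and hashlib.sha256(data).hexdigest() == expected

# The approved bytes pass the check.
assert verify_artifact("model-utils-1.0.tar.gz", b"trusted release bytes")
# A contributor slipping in rogue code changes the bytes, so the check fails.
assert not verify_artifact("model-utils-1.0.tar.gz", b"trusted release bytes + backdoor")
```

Real SCA tools go further (license scanning, known-CVE matching, transitive dependency analysis), but digest pinning is the piece that catches the "malicious contributor slipped in rogue code" scenario described above.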