Daily Tech Digest - December 18, 2024

The AI-Powered IoT Revolution: Are You Ready?

AI not only reduces the cost and latency of these operations but also provides actionable intelligence, enabling smarter decisions that enhance business efficiency by preventing downtimes, minimizing losses, improving sales, and unlocking a range of benefits tailored to specific use cases. Building on this synergy, AI on Edge—where AI processes run directly on edge devices such as IoT sensors, cameras, and smartphones rather than relying solely on cloud computing—will see significant adoption by 2025. By processing data locally, edge AI enables real-time decision-making, eliminating delays caused by data transmission to and from the cloud. This capability will transform applications like autonomous vehicles, industrial automation, and healthcare devices, where fast, reliable responses are mission-critical. Moreover, AI on Edge enhances privacy and security by keeping sensitive data on the device, reduces cloud costs and bandwidth usage, and supports offline functionality in remote or connectivity-limited environments. These advantages make it an attractive option for organizations seeking to push the boundaries of innovation while delivering superior user experiences and operational efficiency. 


Key strategies to enhance cyber resilience

To bolster resilience consider developing stakeholder specific playbooks, Wyatt says. Different teams play different roles in incident response from detecting risk, deploying key controls, maintaining compliance, recovery and business continuity. Expect that each stakeholder group will have their own requirements and set of KPIs to meet, she says. “For example, the security team may have different concerns than the IT operations team. As a result, organizations should draft cyber resilience playbooks for each set of stakeholders that provide very specific guidance and ROI benefits for each group.” ... Cyber resilience is as much about the ability to recover from a major security incident as it is about proactively preparing, preventing, detecting and remediating it. That means having a formal disaster recovery plan, doing regular offsite back-ups of all critical systems and testing both the plan and the recovery process on a frequent basis. ... Boards have become very focused on managing risk and have become increasingly fluent in cyber risk. But many boards are surprised that when a crisis occurs, broader operational resilience is not a point of these discussions, according to Wyatt. Bring your board along by having external experts walk through previous events and break down the various areas of impact. 


Smarter devops: How to avoid deployment horrors

Finding security issues post-deployment is a major risk, and many devops teams shift-left security practices by instituting devops security non-negotiables. These are a mix of policies, controls, automations, and tools, but most importantly, ensuring security is a top-of-mind responsibility for developers. ... “Integrating security and quality controls as early as possible in the software development lifecycle is absolutely necessary for a functioning modern devops practice,” says Christopher Hendrich, associate CTO of SADA. “Creating a developer platform with automation, AI-powered services, and clear feedback on why something is deemed insecure and how to fix it helps the developer to focus on developing while simultaneously strengthening the security mindset.” ... “Software development is a complex process that gets increasingly challenging as the software’s functionality changes or ages over time,” says Melissa McKay, head of developer relations at JFrog. Implementing a multilayered, end-to-end approach has become essential to ensure security and quality are prioritized from initial package curation and coding to runtime monitoring.”


How Do We Build Ransomware Resilience Beyond Just Backups?

While email filtering tools are essential, it’s unrealistic to expect them to block every malicious message. As such, another important step is educating your end users on identifying phishing emails and other suspicious content that make it through the filters. User education is one of those things that should be an ongoing effort, not a one-time initiative. Regular training sessions help reinforce best practices and keep security in focus. To complement training, consider using phishing attack simulators. Several vendors offer tools that generate harmless, realistic-looking phishing messages and send them to your users. Microsoft 365 even includes a phishing simulation tool. ... Limiting user permissions is vital because ransomware operates with the permissions of the user who triggers the attack. As such, users should only have access to the resources they need to perform their jobs—no more, no less. If a user doesn’t have access to a specific resource, the ransomware won’t be able to encrypt it. Moreover, consider isolating high-value data on storage systems that require additional authentication. Doing so reduces exposure if ransomware spreads.


Azure Data Factory Bugs Expose Cloud Infrastructure

The Airflow instance's use of default, unchangeable configurations combined with the cluster admin role's attachment to the Airflow runner "caused a security issue" that could be manipulated "to control the Airflow cluster and related infrastructure," the researchers explained. If an attacker was able to breach the cluster, they also could manipulate Geneva, allowing attackers "to potentially tamper with log data or access other sensitive Azure resources," Unit 42 AI and security research manager Ofir Balassiano and senior security researcher David Orlovsky wrote in the post. Overall, the flaws highlight the importance of managing service permissions and monitoring the operations of critical third-party services within a cloud environment to prevent unauthorized access to a cluster. ... Attackers have two ways to gain access to and tamper with DAG files. They could gain write permissions to the storage account containing DAG files by leveraging a principal account with write permissions; or they could use a shared access signature (SAS) token, which grants temporary and limited access to a DAG file. In this scenario, once a DAG file is tampered with, "it lies dormant until the DAG files are imported by the victim," the researchers explained. The second way is to gain access to a Git repository using leaked credentials or a misconfigured repository.


Whatever happened to the three-year IT roadmap?

“IT roadmaps are now shorter, typically not exceeding two years, due to the rapid pace of technological change,” he says. “This allows for more flexibility and adaptability in IT planning.” Kellie Romack, chief digital information officer of ServiceNow, is also shortening her horizon to align with the two- or three-year timeframe that is the norm for her company. Doing so keeps her focused on supporting the company’s overall future strategy but with enough flexibility to adjust along the journey. “That timeframe is a sweet spot that allows us to set a ‘dream big’ strategy with room to be agile, so we can deliver and push the limits of what’s possible,” she says. “The pace of technological change today is faster than it’s ever been, and if IT leaders aren’t looking around the corner now, it’s possible they’ll fall behind and never catch up.” ... “A roadmap is still a useful tool to provide that north star, the objectives and the goals you’re trying to achieve, and some sense of how you’ll get to those goals,” McHugh says. Without that, McHugh says CIOs won’t consistently deliver what’s needed when it’s needed for their organizations, nor will they get IT to an optimal advanced state. “If you don’t have a goal or an outcome, you’re going to go somewhere, we can promise you that, but you’re not going to end up in a specific location,” she adds.


Innovations in Machine Identity Management for the Cloud

Non-human identities are critical components within the digital landscape. They enable machine-to-machine communications, providing an array of automated services that underpin today’s digital operations. However, their growing prevalence means they are also becoming prime targets for cyber threats. Are existing cybersecurity strategies equipped to address this issue? Acting as agile guardians, NHI management platforms offer promising solutions, securing both the identities and their secrets from potential threats and vulnerabilities. By placing equal emphasis on the management of both human and non-human identities, businesses can create a comprehensive cybersecurity strategy that matches the complexity and diversity of today’s digital threats. ... When unsecured, NHIs become hotbeds for cybercriminals who manipulate these identities to procure unauthorized access to sensitive data and systems. For companies regularly transacting in consumer data (like in healthcare or finance), the unauthorized access and sharing of sensitive data can lead to hefty penalties due to non-compliant data management practices. An effective NHI management strategy acts as a pivotal control over cloud security. 


From Crisis to Control: Establishing a Resilient Incident Response Framework for Deployed AI Models

An effective incident response framework for frontier AI companies should be comprehensive and adaptive, allowing quick and decisive responses to emerging threats. Researchers at the Institute for AI Policy and Strategy (IAPS) have proposed a post-deployment response framework, along with a toolkit of specific incident responses. The proposed framework consists of four stages: prepare, monitor and analyze, execute, and recovery and follow up. ... Developers have a variety of actions available to them to contain and mitigate the harms of incidents caused by advanced AI models. These tools offer a variety of response mechanisms that can be executed individually or in combination with one another, allowing developers to tailor specific responses based on the incident's scope and severity. ... Frontier AI companies have recently provided more transparency to their internal policies regarding safety, including Responsible Scaling Policies (RSPs) published by Anthropic, Google DeepMind, and OpenAI. When it comes to responding to post-deployment incidents, all three RSPs lack clear, detailed, and actionable plans for responding to post-deployment incidents. 


We’re Extremely Focused on Delivering Value Sustainably — NIH CDO

Speaking of challenges in her role as the CDO, Ramirez highlights managing a rapidly growing data portfolio. She stresses the importance of fostering partnerships and ensuring the platform’s accessibility to those aiming to leverage its capabilities. One of the central hurdles has been effectively communicating the portfolio’s offerings and predicting data availability for research purposes. She describes the critical need to align funding and partnerships to support delivery timelines of 12 to 24 months, a task that demanded strong leadership from the coordinating center. This dual role of ensuring readiness and delivery has been both a challenge and a success. Ramirez shares that the team has grown more adept at framing research data as a product of their system, ready to meet the needs of collaborators. She also expresses enthusiasm for working with partners to demonstrate the platform’s benefits and efficiencies in advancing research objectives. Sharing AI literacy and upskilling initiatives in the organization, Ramirez mentions building a strong sense of community among data professionals. She highlights efforts to establish a community of practice that brings together individuals working in their federal coordinating center and awardees who specialize in data science and systems.


5 Questions Your Data Protection Vendor Hopes You Don’t Ask

Data protection vendors often rely on high-level analysis to detect unusual activity in backups or snapshots. This includes threshold analysis, identifying unusual file changes, or detecting changes in compression rates that may suggest ransomware encryption. These methods are essentially guesses prone to false positives. During a ransomware attack, details matter. ... Organizations snapshot or back up data regularly, ranging from hourly to daily intervals. When an attack occurs, restoring a snapshot or backup overwrites production data—some of which may have been corrupted by ransomware—with clean data. If only 20% of the data in the backup has been manipulated by bad actors, recovering the full backup or snapshot will result in overwriting 80% of data that did not need restoration. ... Cybercriminals understand that databases are the backbone of many businesses, making them prime targets for extortion. By corrupting these databases, they can pressure organizations into paying ransoms. ... AI is now a mainstream topic, but understanding how an AI engine is trained is critical to evaluating its effectiveness. When dealing with ransomware, it's important that the AI is trained on real ransomware variants and how they impact data.



Quote for the day:

"The essence of leadership is the capacity to build and develop the self-esteem of the workers." -- Irwin Federman

Daily Tech Digest - December 17, 2024

Together For Good: How Humans And AI Can Close The Health Gap

While the potential is immense, AI’s effectiveness in closing the health gap hinges on more than just technological advancement. AI must be deliberately tailored, trained, tested, and targeted to bring out the best in and for people and the planet. This means anchoring AI development and deployment in a holistic understanding of humans, and the environment they evolve in. It also entails the design of ethical frameworks, transdisciplinary collaboration, and 360-degree strategies that systematically bring out the complementarity of AI and NI, including the knowledge, experience, and intuition of humans. ... Closing the gap of preventable health inequalities cannot be achieved by advanced algorithms alone. It requires us to integrate the strengths of artificial intelligence with natural intelligence — the knowledge, ethical judgment, empathy, and cultural understanding of human beings — to ensure that solutions are both effective and just. By anchoring AI in localized insight and human expertise, we can align personal health improvements (micro) with community-led action (meso), informed national policies (macro), and globally coordinated strategies (meta), delivering equitable outcomes in every arena of the organically evolving kaleidoscope that we are part of.


How to Take a Security-First Approach to AI Implementation

Whether it's a third-party tool or an in-house project, thorough research and a clear plan will go a long way toward reducing risks. When developing guidelines for AI implementation, the first step is to match the business case with available tools, remembering that some models are more suited to specific tasks than others. Practicing a Secure by Design strategy from the ground up can future-proof AI implementation. These principles ensure that security is prioritized throughout the entire lifecycle of an AI product. A Secure by Design methodology implements multiple layers of defense against cyberthreats. During the planning stage, the security team's input is critical for a Secure by Design approach. Vendor trust is also vital. Evaluating vendors for trustworthiness and auditing contracts thoroughly, including regular monitoring of updates to vendor terms and conditions, are imperative. It is essential for data quality to be assessed for metrics like accuracy, relevance, and completeness.... Keeping security at the forefront from the get-go confers advantages, especially as tools and risks evolve. Safer AI is on the horizon as more users adhere to best practices through regulatory frameworks, international collaborations, and security-first use cases. 


Data Governance in DevOps: Ensuring Compliance in the AI Era

Implementing effective CI/CD pipeline governance in the age of AI requires a multifaceted approach. It starts with establishing clear policies outlining compliance requirements, security standards, and ethical guidelines for AI development. These policies should be embedded into the pipeline through automated checks and gates. Leveraging advanced automation tools for continuous compliance checking throughout the pipeline is essential. These tools can scan code for vulnerabilities, check for adherence to coding standards, and even analyze AI models for potential biases or unexpected behaviors. Robust version control and change management processes are also crucial components of pipeline governance. They ensure that every change to the codebase or AI model is tracked, reviewed, and approved before progressing through the pipeline. We can't forget logging and auditing. Comprehensive logging and monitoring of all pipeline activities provide the necessary audit trails for compliance demonstration and post-incident analysis. In the context of AI, this extends to monitoring deployed models for performance drift or unexpected behaviors, ensuring ongoing compliance post-deployment. 


Top 10 Cloud Data Center Stories of 2024

If you work in the data center industry, you may use term on-premise (or on-prem) frequently. But have you ever stopped to wonder how the phrase entered the data center lexicon – or considered why on-premise doesn’t make grammatical sense? In a nutshell, the answer is that it should be on-premises – note the s on the end – because premise and premises are different words. If not, you’ll be enlightened by our coverage of the history of the term on-prem and why it has long irked certain CIOs. ... The more complex your cloud architecture becomes, the harder it is to identify security risks and other misconfigurations. That’s why the ability to automate security assessments is growing increasingly important. But how good are the solutions that cloud providers offer for this purpose? To find out, we took a close look at compliance reporting tools from Azure and GCP. The takeaway was that these solutions can automate much of the work necessary to identify misconfigurations that could trigger compliance violations, but they’re no substitute for human experts. ... What was less often discussed – but equally important – is the role of edge infrastructure in AI. That’s what we focused on in our report about edge AI, meaning AI workloads that run at the network edge instead of in traditional cloud data centers.


Clop Ransomware Takes Responsibility for Cleo Mass Exploits

Whether or not Clop is actually responsible for attacks targeting various types of Cleo's MFT software couldn't be confirmed. Separately, on Dec. 10, British cybersecurity expert Kevin Beaumont reported having evidence that the ransomware group Termite possessed a zero-day exploit for vulnerabilities in the Cleo products. Security experts said both groups may well have been involved, either separately or together. "Although Cl0p posted a message on their website, this is not hard evidence pointing to a single threat group's involvement. Therefore, any discussion of whether Termite or Cl0p are behind this exploit is speculation until proven with other indicators/evidence," said Christiaan Beek, senior director of threat analytics at cybersecurity firm Rapid7. "We have seen Cl0p utilize complex chains similar to this vulnerability in multiple file transfer use cases before, such as MOVEit and Accellion FTA in 2021," Beek added.  ... The latest attacks appear to target in part CVE-2024-50623, an unrestricted file upload vulnerability in the managed file transfer products Cleo Harmony, VLTrader and LexiCom. Exploiting the vulnerability enables attackers to remotely execute code with escalated privileges.


Balancing security and user experience to improve fraud prevention strategies

There may not be one right way of handling the balance of security and user-friendly customer experience. Different institutions and their customers will have different needs, and processes might vary somewhat. But overall, there should be clear, easy-to-follow standards and checkpoints built into whatever financial institutions do. For instance, some banks or credit card companies may allow customers to institute their own stop gap for purchases over a certain amount, which may reduce the incentive for relatively large-scale fraud. These companies could also introduce some level of personalization into the processes, like how a credit or debit card could be easily turned on and turned off by customers themselves via an app or site. ... Meanwhile, it seems like barely a day goes by when there’s not some coverage of fraud or a release of personal info via hacking from some corporation, and some speculate increasingly advanced technology may make it easier for those who want to perpetrate fraud. With this in mind, there may be a greater emphasis placed on enhancing security and experimentation in what different institutions do to find what works best and to have a process in place that allows customers to have confidence in their banks and credit card companies.


Generative AI Is Just the Beginning — Here’s Why Autonomous AI is Next

Embracing this technology will unlock significant opportunities to improve organizational efficiency and accuracy. But before we dive into this, let us start with some definitions. Autonomous AI refers to systems that can perform tasks without human intervention. In contrast, generative AI systems focus on content creation based on existing data. What sets autonomous AI apart is its ability to self-manage. Understanding this difference is crucial, enabling organizations to use AI for more complex operations like predictive maintenance and resource optimization. ... The first step in successfully integrating autonomous AI into your organization is implementing robust data governance frameworks to support these advanced systems. Establish clear data privacy and transparency guidelines to ensure autonomous AI operates within ethical boundaries. It’s crucial to incorporate technical controls that prevent the AI from making reckless decisions, aligning its actions with your organizational values. ... When exploring the future of autonomous AI within your organization, it’s crucial to monitor and evaluate your autonomous AI systems regularly. Continuous assessment allows you to understand how the AI is performing and identify potential improvement areas.


Privacy by design approach drives business success in today’s digital age

Businesses that adhere to data privacy practices validate the upkeep of customer data and data privacy, earning them a stronger brand reputation. They should also ensure privacy is embedded in the organisation’s framework across the technology, products, and services, which is also known as Privacy by Design (PbD). ... The PbD framework was developed by Dr. Ann Cavoukian, Information & Privacy Commissioner of Ontario jointly with the Dutch Data Protection Authority and the Netherlands Organisation for Applied Scientific Research in 1995. It aimed to cultivate and embed privacy defences to safeguard data in the design process of a product, service, or system. Privacy becomes the default setting built at the very beginning rather than an afterthought. This framework is founded on seven core principles: being proactive and not reactive, having privacy as the default setting, having privacy embedded into design, full functionality, end-to-end security, visibility and transparency, and respect for user privacy. ... The PbD approach which is proactive indicates the company’s commitment to protecting the customer’s sensitive personal information. PbD enables companies to have personalised engagement with customers while respecting their privacy preferences.


Top 10 cybersecurity misconfigurations: Nail the setup to avoid attacks

Despite the industry-wide buzz about things like zero-trust, which is rooted in concepts such as least-privileged access control, this weakness still runs rampant. CISA’s publication calls out excessive account privileges, elevated service accounts, and non-essential use of elevated accounts. Anyone who has worked in IT or cyber for some time knows that many of these issues can be traced back to human behavior and the general demands of working in complex environments. ... Another fundamental security control that makes an appearance is the need to segment networks, a practice again that ties to the broader push for zero trust. By failing to segment networks, organizations are failing to establish security boundaries between different systems, environments, and data types. This allows malicious actors to compromise a single system and move freely across systems without encountering friction and additional security controls and boundaries that could impede their nefarious activities. The publication specifically calls out challenges where there is a lack of segmentation between IT and OT networks, putting OT networks at risk, which have real-world implications around security and safety in environments such as industrial control systems.


Why Indian enterprises are betting big on hybrid multi-cloud strategies?

The multi-cloud strategy in India is deeply intertwined with the country’s broader digital transformation initiatives. The Government of India’s Digital India program and initiatives like the National Cloud Initiatives are providing a robust framework for cloud adoption. ... The importance of edge computing is growing, and the rollout of 5G is opening up new possibilities for distributed cloud architectures. Telecom titans like Jio and Airtel are investing substantially in cloud-native infrastructure, creating ripple effects throughout industries. On the other hand, startup ecosystems play a crucial role too. Bangalore, often called the Silicon Valley of India, has become a hotbed for cloud-native technologies. Companies and numerous cloud consulting firms are developing cutting-edge multi-cloud solutions that are gaining global recognition. Foreign investments are pouring in. Major cloud providers like AWS, Microsoft Azure, and Google Cloud are expanding their infrastructure in India, with dedicated data centers that meet local compliance requirements. This local presence is critical for enterprises concerned about data sovereignty and latency.



Quote for the day:

"You aren’t going to find anybody that’s going to be successful without making a sacrifice and without perseverance." -- Lou Holtz

Daily Tech Digest - December 16, 2024

What IT hiring looks like heading into 2025

AI isn’t replacing jobs so much as it is reshaping the nature of work, said Elizabeth Lascaze, a principal in Deloitte Consulting’s Human Capital practice. She, too, sees evidence that entry-level roles focused on tasks like note-taking or basic data analysis are declining as organizations seek more experienced workers for junior positions. “Today’s emerging roles require workers to quickly leverage data, generate insights, and solve problems,” she said, adding that those skilled in using AI, such as cybersecurity analysts applying AI for threat detection, will be highly sought after. Although the adoption of AI has led to some “growing pains,” many workers are actually excited about it, Lascaze said, with most employees believing it will create new jobs and enhance their careers. “Our survey found that just 24% of early career workers and 14% of tenured workers fear their jobs will be replaced by AI,” Lascaze said. “Tenured workers are more likely to lead organizational strategy, so they may prioritize AI’s potential to improve efficiency, sophistication, and work quality in existing roles rather than AI’s potential to eliminate certain positions. “These workers reported being slightly more focused on building AI fluency than early-career employees,” Lascaze said. 


The Future of AI (And Travel) Relies on Synthetic Data

Synthetic data enhances accuracy and fairness in AI models as organic data can be biased or unbalanced, leading to ML models failing to represent diverse populations accurately. With synthetic data, researchers can create datasets that more accurately reflect the demographics they intend to serve, thereby minimizing biases and improving overall model robustness. ... Synthetic data can be a double-edged sword. While it addresses data privacy and availability challenges, it can inadvertently carry or magnify biases embedded in the original dataset. When source data is flawed, those imperfections can cascade into the synthetic version, skewing results — a critical concern in high-stakes domains like healthcare and finance, where precision and fairness are paramount. To counteract this, having a human in the loop is super important. While there’s a temptation to use synthetic data to fill in every gap for better accuracy and fairness, we understood that running synthetic searches for every flight combination possible globally for our price tracking and predictions feature could overwhelm our booking system and impact real travelers organically searching for flights. Synthetic data has limitations that go beyond bias. 


9 Cloud Service Adoption Trends

Most organizations are building modern cloud computing applications to enable greater scalability while reducing cost and consumption costs. They’re also more focused on the security and compliance of cloud systems and how providers are validating and ensuring data protection. “Their main focus is really around cost, but a second focus would be whether providers can meet or exceed their current compliance requirements,” says Will Milewski, SVP of cloud infrastructure and operations at content management solution provider Hyland. ... There’s a fundamental shift in cloud adoption patterns, driven largely by the emergence of AI and ML capabilities. Unlike previous cycles focused primarily on infrastructure migration, organizations are now having to balance traditional cloud ROI metrics with strategic technology bets, particularly around AI services. According to Kyle Campos, chief technology and product officer at cloud management platform provider CloudBolt Software, this evolution is being catalyzed by two major forces: First, cloud providers are aggressively pushing AI capabilities as key differentiators rather than competing on cost or basic services. Second, organizations are realizing that cloud strategy decisions today have more profound implications for future innovation capabilities than ever before.


We’ve come a long way from RPA: How AI agents are revolutionizing automation

As the AI ecosystem evolves, a significant shift is occurring toward vertical AI agents — highly specialized AI systems designed for specific industries or use cases. As Microsoft founder Bill Gates said in a recent blog post: “Agents are smarter. They’re proactive — capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. “ Unlike traditional software-as-a-service (SaaS) models, vertical AI agents do more than optimize existing workflows; they reimagine them entirely, bringing new possibilities to life. ... The most profound shift in the automation landscape is the transition from RPA to multi-agent AI systems capable of autonomous decision-making and collaboration. According to a recent Gartner survey, this shift will enable 15% of day-to-day work decisions to be made autonomously by 2028. These agents are evolving from simple tools into true collaborators, transforming enterprise workflows and systems. ... As AI agents progress from handling tasks to managing workflows and entire jobs, they face a compounding accuracy challenge. Each additional step introduces potential errors, multiplying and degrading overall performance. 


8 reasons why digital transformations still fail

“People got really excited about, ‘We’re going to transform,’” Woerner says, but she believes part of the problem lies with leaders who “didn’t have the discipline to make the hard choices early on” to get employee buy-in. Ranjit Varughse, CIO of automotive paint and equipment firm Wesco Group, agrees. “The first challenge is getting digital transformation buy-in from teams at the outset. People are creatures of habit, making many hesitant to change their existing systems and processes,” he says. “Without a clear change management strategy to get a team aligned, ERP implementations in particular can be slow, stall, or even fail entirely.” ... Digital transformation isn’t a technology problem, it’s about understanding how people actually work, not how we think they should work, Wei says. “At PropertySensor, we scrapped our first version after realizing real estate agents needed mobile-first solutions, not desktop dashboards,” he says. ... “People, process, and technology” is a common phrase technology leaders use when discussing the critical elements of a transformation. “But the real focus should be people, people, people,” echoes Megan Williams, vice president of global technology strategy and transformation at TransUnion.


How companies can address bias and privacy challenges in AI models

Companies understand that AI adoption is existential to their survival, with the winners of tomorrow being determined by their ability to harness AI effectively. Furthermore, they understand that their brand’s reputation is one of their most valuable assets. Missteps with AI—especially in mission-critical contexts (think of a trading algorithm going awol, a breach of user privacy, or a failure to meet safety standards)—can erode public trust and harm a company’s bottom line. This could have dire consequences. With a company’s competitiveness and potentially its very survival at stake, AI governance becomes a business imperative that they cannot afford to ignore. ... Certainly, we see a lot of activity from the government – both at the state and federal levels – which is creating a fragmented approach. We also see leading companies who understand that adopting AI is crucial to their future and want to move fast. They are not waiting for the regulatory environment to settle and are taking a leadership position in adopting responsible AI principles to safeguard their brand reputations. So, I believe companies will act intelligently out of self-interest to accelerate their AI initiatives and increase business returns. 


Ensuring AI Accountability Through Product Liability: The EU Approach and Why American Businesses Should Care

In terms of a substantive law regulating AI (which can be the basis of the causality presumption under the proposed AI Liability Directive), the European Union’s Artificial Intelligence Act (AI Act) entered into force on August 1, 2024, becoming the first comprehensive legal framework for AI globally. The AI Act applies to providers and developers of AI systems that are marketed or used within the EU (including free-to-use AI technology), regardless of whether those providers or developers are established in the EU or a separate country. The EU AI Act sets forth requirements and obligations for developers and deployers of AI systems in accordance with risk-based classification system and a tiered approach to governance, which are two of the most innovative features of the AI Act. The Act classifies AI applications into four risk categories: unacceptable risk, high risk, limited risk, and minimal or no risk. AI systems deemed to pose an unacceptable risk, such as those that violate fundamental rights, are outright banned. ... High-risk AI systems, which include areas such as health care, law enforcement, and critical infrastructure, will face stricter regulatory scrutiny and must comply with rigorous transparency, data governance, and safety protocols. 


Agentic AI is evolving into specialised assistants, enabling the workforce to focus on value-adding tasks

A structured discovery approach is required to identify high impact areas for AI adoption rather than siloed use-cases. Infosys Topaz comprises verticalised blueprints, industry catalogues and strategic AI value map analysis capabilities. We have created playbooks for industries that lay out a structured roadmap to embed and mature GenAI into core processes and operations and across the IT landscape. This includes the right use-cases across the value stream spanning operations, customer experience, research and development, etc. As part of our Responsible AI by Design approach, we implement robust technical and process guardrails to ensure privacy and security. These include impact assessments, audits, automated policy enforcement, monitoring tools, and runtime safeguards to filter inputs and outputs for generative AI. We also use red-teaming and advanced testing tools to identify vulnerabilities and fortify AI models. Additionally, we employ privacy-preserving techniques such as Homomorphic Encryption and Secure Multi-Party Computation to enhance the security and resilience of our AI solutions. ... AI-driven monitoring tools detect inefficiencies in IT infrastructure, leveraging predictive analytics and forecasting techniques to improve utilisation in real time.


Security leaders top 10 takeaways for 2024

One of the most significant new rules, which has received the lion’s share of press attention, is the ‘materiality’ component, or the need to report “material” cybersecurity incidents to the SEC within four business days of discovery. At issue is whether the incident led to significant risk to the organization and its shareholders. If so, it’s defined as material and must be reported within four days of this determination being made (not its initial discovery). “Materiality extends beyond quantitative losses, such as direct financial impacts, to include qualitative aspects, like reputational damage and operational disruptions,” he says. McGladrey says the SEC’s materiality guidance underscores the importance of investor protection in relation to cybersecurity events and, if in doubt, the safest path is reporting. “If a disclosure is uncertain, erring on the side of transparency safeguards shareholders,” he tells CSO. ... As a virtual or fractional CISO service, Sage has observed startups engaging vCISO services earlier, in pre-seed and Series A stage and, in some cases, before they’ve finalized their minimum viable product. “Small technology consulting and boutique software development groups are looking for ISO 27001 certifications to ensure they can continue serving their larger customers,” she tells CSO.


Emotional intelligence in IT management: Impact, challenges, and cultural differences

While delivering results is the primary goal of any leader, you can’t forget that you’re managing people, not machines. Emotional intelligence helps balance the need for productivity with fairness and empathy. One way to illustrate this balance is through handling difficult conversations about career moves. Managing a team of over 100 support specialists for several years gave me the opportunity to conduct an interesting experiment. Many employees tend to hide the fact that they are exploring job opportunities elsewhere until the last minute. This creates unnecessary tension and can lead to higher turnover. However, if a manager removes the stigma around job interviews and treats them as part of market research, it encourages open communication. ... Emotionally intelligent managers possess the ability to identify the core of a conflict without letting it escalate. Attempting to gather every single piece of information is not always helpful. Instead, managers should focus on resolving conflicts, as often the solution is already within the team. This does not mean conducting surveys or asking for feedback from each person, as delicate situations require a more refined approach. A manager should observe, analyze, and extract the most significant points quickly and intuitively, enabling conflict resolution before it grows into a larger issue.



Quote for the day:

“Things come to those who wait, but only the things left by those who hustle” -- Abraham Lincoln

Daily Tech Digest - December 15, 2024

Navigating the Future: Cloud Migration Journeys and Data Security

To meet the requirements of DORA and future regulations, business leaders must adopt a proactive and reflexive approach to cybersecurity. Strong cyber hygiene practices must be integrated throughout the business, ensuring consistency in how data is handled, protected, and accessed. It is important to note at this juncture that enhanced data security isn’t purely focused on compliance. Modern IT researchers and business analysts have been studying what differentiates the most innovative companies for decades and have identified two key principles that help businesses achieve this: Unified Control and Federated Protection. ... Advancements in data security technologies are reshaping the cloud landscape, enabling faster and more secure migrations. Privacy Enhancing Technologies (PETs) like dynamic data masking (DDM), tokenisation, and format-preserving encryption help businesses anonymise sensitive data, reducing breach risks while keeping cloud adoption fast and flexible. However, as businesses will inevitably adopt multi-cloud strategies to support their processes, they will require interoperable security platforms that can seamlessly integrate across multiple cloud environments. 


Maximizing AI Payoff in Banking Will Demand Enterprise-Level Rewiring

Beyond thinking in broad strokes of AI’s applicability in the bank, McKinsey holds that an institution has to be ready to adopt multiple kinds of AI set up in a way to work with each other. This includes analytical AI — the types of AI that some banks have been using for years for credit and portfolio analysis, for instance — and generative AI, in the forms of ChatGPT and others, as well as “agentic AI.” In general, agentic AI uses AI that applies other types of AI to perform analyses and solve problems as a “virtual coworker.” It’s a developing facet of AI and, as described in the report, is meant to manage multiple AI inputs, rather than having a bank lean on one model. ... “You measure the outcomes you want to achieve and at the end of the pilot you will typically come out with a very good understanding of how to scale it,” Giovine says. Over six to 12 months after the pilot, “you can scale it over a good chunk of the domain.” And here, the consultant says, is where the bonus kicks in: Often a good deal of the work done to bring AI thinking to one domain can be re-used. This applies to both the business thinking and technology.


Synthetic data has its limits — why human-sourced data can help prevent AI model collapse

The more AI-generated content spreads online, the faster it will infiltrate datasets and, subsequently, the models themselves. And it’s happening at an accelerated rate, making it increasingly difficult for developers to filter out anything that is not pure, human-created training data. The fact is, using synthetic content in training can trigger a detrimental phenomenon known as “model collapse” or “model autophagy disorder (MAD).” Model collapse is the degenerative process in which AI systems progressively lose their grasp on the true underlying data distribution they’re meant to model. This often occurs when AI is trained recursively on content it generated, leading to a number of issues:Loss of nuance: Models begin to forget outlier data or less-represented information, crucial for a comprehensive understanding of any dataset. Reduced diversity: There is a noticeable decrease in the diversity and quality of the outputs produced by the models. Amplification of biases: Existing biases, particularly against marginalized groups, may be exacerbated as the model overlooks the nuanced data that could mitigate these biases. Generation of nonsensical outputs: Over time, models may start producing outputs that are completely unrelated or nonsensical.


The Macy’s accounting disaster: CIOs, this could happen to you

It wasn’t outright fraud or theft. But that’s merely because the employee didn’t try to steal. But the same lax safeguards that allowed expense dollars to be underreported could have just as easily allowed actual theft. “What will happen when someone actually has motivation to commit fraud? They could have just as easily kept the $150 million,” van Duyvendijk said. “They easily could have committed mass fraud without this company knowing. (Macy’s) people are not reviewing manual journals very carefully.” ... “It’s true that most ERPs are not designed to catch erroneous accounting,” she said. “However, there are software tools that allow CFOs and CAOs to create more robust controls around accounting processes and to ensure the expenses get booked to the correct P&L designation. Initiating, approving, recording transactions, and reconciling balances are each steps that should be handled by a separate member of the team. There are software tools that can assist with this process, such as those that enable use of AI analytics to assess actual spend and compare that spend to your reported expenses. Some such tools use AI to look for overriding journal entries that reverse expense items and move those expenses to a balance sheet account.”


Digital Nomads and Last-Minute Deals: How Online Data Enables Offline Adventures

Along with remote work preference, the pandemic boosted another trend. Many emerged from it more spontaneous, seeing how travel can be restricted so suddenly and for so long. Even before, millennials were ready to embrace impromptu travel, with half of them having planned last-minute vacations. For digital nomads, last-minute deals for flights and hotels are even more important as they need to adapt to changing situations quickly to strike a work-life balance on the go. This opens opportunities for websites to offer services that assist digital nomads in finding the best last-minute deals. ... Many of the first successful startups by the nomads were teaching about the nomadic lifestyle or connecting the nomads with each other. For example, some websites use APIs to aggregate data about the suitability of cities for remote work. Drawing data from various online sources in real time, such platforms can constantly provide information relevant to traveling remote workers. And the relevant information is very diverse. The aforementioned travel and hospitality prices and deals alone generate volumes of data every second. Then, there is information about security and internet stability in various locations, which requires reliable and constantly updated reviews.


It’s not what you know, it’s how you know you know it

Developers and technologists have been learning to code using online media such as blogs and videos increasingly in the last four years according to the Stack Overflow Developer Survey–60% in 2021 increased to 82% in 2024. The latest resource that developers could utilize for learning is generative AI which is emerging as a key tool that offers real-time problem-solving assistance, personalized coding tips, and innovative ways to enhance skill development seamlessly integrated within daily workflows. There has been a lot of excitement in the world of software development about AI’s potential to increase the speed of learning and access to more knowledge. Speculation abounds as to whether learning will be helped or hindered by AI advancement. Our recent survey of over 700 developers and technologists reveals the process of knowing things is just that—a process. New insights about how the Stack Overflow community learns demonstrate that software professionals prefer to gain and share knowledge through hands-on interactions. Their preferences for sourcing and contributing to groups or individuals (or AI) provides color on the evolving landscape of knowledge work.


What is data science? Transforming data into value

While closely related, data analytics is a component of data science, used to understand what an organization’s data looks like. Data science takes the output of analytics to solve problems. Data scientists say that investigating something with data is simply analysis, so data science takes analysis a step further to explain and solve problems. Another difference between data analytics and data science is timescale. Data analytics describes the current state of reality, whereas data science uses that data to predict and understand the future. ... The goal of data science is to construct the means to extract business-focused insights from data, and ultimately optimize business processes or provide decision support. This requires an understanding of how value and information flows in a business, and the ability to use that understanding to identify business opportunities. While that may involve one-off projects, data science teams more typically seek to identify key data assets that can be turned into data pipelines that feed maintainable tools and solutions. Examples include credit card fraud monitoring solutions used by banks, or tools used to optimize the placement of wind turbines in wind farms.


Tech Giants Retain Top Spots, Credit Goes to Self-Disruption

Companies today know they are not infallible in the face of evolving technologies. They are willing to disrupt their tried and tested offerings to fully capitalize on innovation. This ability of "dual transformation" - sustaining as well as reinventing the core business - is a hallmark of successful incumbents. It enables companies to optimize their existing operations while investing in the future, ensuring they are not caught flat-footed when the next wave of disruption hits. And because they have capital, talent and resources, they are already ahead of newer players. ... There is also a core cultural shift to encourage innovative thinking. Amazon implemented its famous "two-pizza teams" approach, where small, autonomous groups work on focused projects with minimal bureaucracy. Launched during the dot-com boom, Amazon subsequently ventured into successful innovations, including Prime, AWS and Alexa. Google's longstanding "20% time" policy, which allows employees to dedicate a portion of their workweek to passion projects, resulted in breakthrough products including AdSense and Google News. Drawing from decades of experience, these organizations know the whole is greater than the sum of its parts.


The Power of the Collective Purse: Open-Source AI Governance and the GovAI Coalition

Collaboration and transparency often go hand in hand. One of the most significant outcomes of the GovAI Coalition’s work is the development of open-source resources that benefit not only coalition members but also vendors and uninvolved governments. By pooling resources and expertise, the coalition is creating a shared repository of guidelines, contracting language, and best practices that any government entity can adapt to their specific needs. This collaborative, open-source initiative greatly reduces the transaction costs for government agencies, particularly those that are understaffed or under-resourced. While the more expansive budgets and technological needs of larger state and local governments sometimes lead to outsized roles in Coalition standard-setting, this allows smaller local governments, which may lack the capacity to develop comprehensive AI governance frameworks independently, to draw on the Coalition’s collective institutional expertise. This crowd-sourced knowledge ensures that even the smallest agencies can implement robust AI governance policies without having to start from scratch.


Redefining software excellence: Quality, testing, and observability in the age of GenAI

Traditional test automation has long relied on rigid, code-based frameworks, which require extensive scripting to specify exactly how tests should run. GenAI upends this paradigm by enabling intent-driven testing. Instead of focusing on rigid, script-heavy frameworks, testers can define high-level intents, like “Verify user authentication,” and let the AI dynamically generate and execute corresponding tests. This approach reduces the maintenance overhead of traditional frameworks, while aligning testing efforts more closely with business goals and ensuring broader, more comprehensive test coverage. ... QA and observability are no longer siloed functions. GenAI creates a semantic feedback loop between these domains, fostering a deeper integration like never before. Robust observability ensures the quality of AI-driven tests, while intent-driven testing provides data and scenarios that enhance observability insights and predictive capabilities. Together, these disciplines form a unified approach to managing the growing complexity of modern software systems. By embracing this symbiosis, teams not only simplify workflows but raise the bar for software excellence, balancing the speed and adaptability of GenAI with the accountability and rigor needed to deliver trustworthy, high-performing applications.



Quote for the day:

"Success is not the key to happiness. Happiness is the key to success. If you love what you are doing, you will be successful." -- Albert Schweitzer

Dily Tech Digest - December 14, 2024

How Conscious Unbossing Is Reshaping Leadership And Career Growth

Conscious unbossing presents both challenges and opportunities for organizations. On the one hand, fewer employees pursuing traditional leadership tracks can create gaps in decision-making, team development, and operational consistency. On the other hand, organizations that embrace unbossing as a cultural strategy can thrive. Novartis is a prime example, fostering a culture of curiosity and empowerment that drives both engagement and innovation. By breaking down rigid hierarchies, they’ve shown how unbossed leadership can be a strategic advantage rather than a liability. ... Conscious unbossing is transforming how we think about leadership and career progression. Organizations that adapt by redefining leadership roles, offering flexible career pathways, and building cultures rooted in curiosity and empathy will thrive. Companies like Novartis, Patagonia, and Microsoft have proven that unbossed leadership isn’t a limitation—it’s an opportunity to innovate and grow. By embracing this shift, businesses can create resilient, dynamic teams and ensure leadership continuity. However, this approach also comes with challenges that organizations must navigate to ensure its success. One potential downside is the risk of role ambiguity. 


Why agentic AI and AGI are on the agenda for 2025

We’re ready to move beyond basic now, and what we’re seeing is an evolution towards a digital co-worker – an agent. Agents are really those digital coworkers, our friends, that are going to help us to do research, write a text, and then publish it somewhere. So you set the goal – let’s say, run research on some telco and networking predictions for next year – and an agent would do the research and run it by you, and then push it to where it needs to go to get reviewed, edited, and more. You would provide it with an outcome, and it will choose the best path to get to that outcome. Right now, Chatbots are really an enhanced search engine with creative flair. But Agentic AI is the next stage of evolution, and will be used across enterprises as early as next year. This will require increased network bandwidth and deterministic connectivity, with compute closer to users – but these essentials are already being rolled out as we speak, ensuring Agentic AI is firmly on the agenda for enterprises in the new year. ... Amid the AI rush, we’ve been focused on the outcomes rather than the practicalities of how we’re accessing and storing the data being generated. But concerns are emerging. Where does the data go? Does it disappear in a big cloud? Concerns are obviously being raised in many sectors, particularly in the medical space in which, medical records cannot leave state/national borders. 


Robust Error Detection to Enable Commercial Ready Quantum Computers from Quantum Circuits

Quantum Circuits has the goal of first making components that are correct and then scaling the systems. This is part of the larger goal of making commercial ready quantum computers. What is meant by commercial ready quantum computers ? This means you can bet your business or company on the results of a quantum computer. Just as we rely today on servers and computers than provide services via cloud computer systems. Being able trust and rely on quantum computers means systems that are repeatable, predictable and trusted. They have built an 8 qubit system and enterprise customers have been using them. Customers have said that using error mitigation and error detection can enable them to get far more utility from Quantum Circuits than competing quantum computers. Error suppression and error mitigation are common techniques and have intensive efforts by most quantum computer companies and the entire Quantum computer community. Quantum Circuits’ error-detecting dual-rail qubits innovation allows errors to be detected and corrected first to avoid disrupting performance at scale. This system will enable a 10x reduction in resource requirements for scalable error correction.


5 reasons why Google's Trillium could transform AI and cloud computing - and 2 obstacles

Trillium is designed to deliver exceptional performance and cost savings, featuring advanced hardware technologies that set it apart from earlier TPU generations and competitors. Key innovations include doubled High Bandwidth Memory (HBM), which improves data transfer rates and reduces bottlenecks. Additionally, as part of its TPU system architecture, it incorporates a third-generation SparseCore that enhances computational efficiency by directing resources to the most important data paths. There is also a remarkable 4.7x increase in peak compute performance per chip, significantly boosting processing power. These advancements enable Trillium to tackle demanding AI tasks, providing a strong foundation for future developments and applications in AI. ... Trillium is not just a powerful TPU; it is part of a broader strategy that includes Gemini 2.0, an advanced AI model designed for the "agentic era," and Deep Research, a tool to streamline the management of complex machine learning queries. This ecosystem approach ensures that Trillium remains relevant and can support the next generation of AI innovations. By aligning Trillium with these advanced tools and models, Google is future-proofing its AI infrastructure, making it adaptable to emerging trends and technologies in the AI landscape.


How Industries Are Using AI Agents To Turn Data Into Decisions

In the past, this required hours of manual work to standardize the various file formats — such as converting PDFs to spreadsheets — and reconcile inconsistencies like differing terminologies for revenue or varying date formats. Today, AI agents automate these tasks with human supervision, adapting to schema changes dynamically and normalizing data as it comes in. ... While extracting insights is vital, the ultimate goal of any data workflow is to drive action. Historically, this has been the weakest link in the chain. Insights often remain in dashboards or reports, waiting for human intervention to trigger action. By the time decisions are made, the window of opportunity may already have closed. AI agents, with humans in the loop, are expediting the entire cycle by bridging the gap between analysis and execution. ... The advent of AI agents signals a new era in data management — one where workflows are no longer constrained by team bandwidth or static processes. By automating ETL, enabling real-time analysis and driving autonomous actions, these agents, with the right guardrails and human supervision, are creating dynamic systems that adapt, learn and improve over time.


The Power of Stepping Back: How Rest Fuels Leadership and Growth

It's essential to fully step back from work sometimes, especially when balancing the demands of running a business and being a parent. I find that I'm most energised and focused in the mornings, so I like to use that time to read, take notes, and reflect on different aspects of the business - whether it's strategy, growth, or new ideas. It's my creative time to think deeply and plan ahead. ... It's also important to carve out weekend days when I can fully switch off. This time away from the business helps me come back refreshed and with a clearer perspective. Even though I aim to disconnect, Lee (my husband and co-founder) and I often find ourselves discussing business because it's something we're both passionate about - strangely enough, those conversations don't feel like work. ... Stepping back from the day-to-day grind gave me the mental space to realise that while small tests have their place, they can sometimes limit your potential by encouraging cautious, safe moves. By contrast, thinking bigger and aiming for more ambitious goals has opened up a new level of creativity and opportunity. This shift in mindset has been a game-changer for us - it's unlocked several key growth areas, including new product opportunities and ways to engage with customers. 


Navigating the Future of Big Data for Business Success

Big data is no longer just a tool for competitive advantage – it has become the backbone of innovation and operational efficiency across key industries, driving billion-dollar transformations. ... The combination of artificial intelligence and big data, especially through machine learning (ML), is pushing the boundaries of what’s possible in data analysis. These technologies automate complex decision-making processes and uncover patterns that humans might miss. Google’s DeepMind AI, for instance, made a breakthrough in medical research by using data to predict protein folding, which is already speeding up drug discovery. ... Tech giants like Google and Facebook are increasing their data science teams by 20% annually, underscoring the essential role these experts play in unlocking actionable insights from vast datasets. This growing demand reflects the importance of data-driven decision-making across industries. ... AI and machine learning will also continue to revolutionize big data, playing a critical role in data-driven decision-making across industries. By 2025, AI is expected to generate $3.9 trillion in business value, with organizations leveraging these technologies to automate complex processes and extract valuable insights. 


Five Steps for Creating Responsible, Reliable, and Trustworthy AI

Model testing with human oversight is critically important. It allows data scientists to ensure the models they’ve built function as intended and root out any possible errors, anomalies, or biases. However, organizations should not rely solely on the acumen of their data scientists. Enlisting the input of business leaders who are close to the customers can help ensure that the models appropriately address customers’ needs. Being involved in the testing process also gives them a unique perspective that will allow them to explain the process to customers and alleviate their concerns.Be transparent Many organizations do not trust information from an opaque “black box.” They want to know how a model is trained and the methods it uses to craft its responses. Secrecy as to the model development and data computation processes will only serve to engender further skepticism in the model’s output. ... Continuous improvement might be the final step in creating trusted AI, but it’s just part of an ongoing process. Organizations must continue to capture, cultivate, and feed data into the model to keep it relevant. They must also consider customer feedback and recommendations on ways to improve their models. These steps form an essential foundation for trustworthy AI, but they’re not the only practices organizations should follow. 


With 'TPUXtract,' Attackers Can Steal Orgs' AI Models

The NCSU researchers used a Riscure EM probe station with a motorized XYZ table to scan the chip's surface, and a high sensitivity electromagnetic probe for capturing its weak radio signals. A Picoscope 6000E oscilloscope recorded the traces, Riscure's icWaves field-programmable gate array (FPGA) device aligned them in real-time, and the icWaves transceiver used bandpass filters and AM/FM demodulation to translate and filter out irrelevant signals. As tricky and costly as it may be for an individual hacker, Kurian says, "It can be a competing company who wants to do this, [and they could] in a matter of a few days. For example, a competitor wants to develop [a copy of] ChatGPT without doing all of the work. This is something that they can do to save a lot of money." Intellectual property theft, though, is just one potential reason anyone might want to steal an AI model. Malicious adversaries might also benefit from observing the knobs and dials controlling a popular AI model, so they can probe them for cybersecurity vulnerabilities. And for the especially ambitious, the researchers also cited four studies that focused on stealing regular neural network parameters. 


Artificial Intelligence Looms Large at Black Hat Europe

From a business standpoint, advances in AI are going to "make those predictions faster and faster, cheaper and cheaper," he said. Accordingly, "if I was in the business of security, I would try to make all of my problems prediction problems," so they could get solved by using prediction engines. What exactly these prediction problems might be remains an open question, although Zanero said other good use cases include analyzing code, and extracting information from unstructured text - for example, analyzing logs for cyberthreat intelligence purposes. "So it accelerates your investigation, but you still have to verify it," Moss said. "The verify part escapes most students," Zanero said. "I say that from experience." One verification challenge is AI often functions like a very complex, black box API, and people have to adapt their prompt to get the proper output, he said. The problem: that approach only works well when you know what the right answer should be, and can thus validate what the machine learning model is doing. "The real problematic areas in all machine learning - not just using LLMs - is what happens if you do not know the answer, and you try to get the model to give you knowledge that you didn't have before," Zanero said. "That's a deep area of research work."



Quote for the day:

"The only person you are destined to become is the person you decide to be." -- Ralph Waldo Emerson