Daily Tech Digest - October 14, 2024

ConfusedPilot Attack Can Manipulate RAG-Based AI Systems

In a ConfusedPilot attack, a threat actor could introduce an innocuous document that contains specifically crafted strings into the target’s environment. "This could be achieved by any identity with access to save documents or data to an environment indexed by the AI copilot," Mandy wrote. The attack flow that follows from the user's perspective is this: When a user makes a relevant query, the RAG system retrieves the document containing these strings. The malicious document contains strings that could act as instructions to the AI system that introduce a variety of malicious scenarios. These include: content suppression, in which the malicious instructions cause the AI to disregard other relevant, legitimate content; misinformation generation, in which the AI generates a response using only the corrupted information; and false attribution, in which the response may be falsely attributed to legitimate sources, increasing its perceived credibility. Moreover, even if the malicious document is later removed, the corrupted information may persist in the system’s responses for a period of time because the AI system retains the instructions, the researchers noted.


Open source package entry points could be used for command jacking

“Entry point attacks, while requiring user interaction, offer attackers a more stealthy and persistent method of compromising systems [than other tactics], potentially bypassing traditional security checks,” the report warns. Over the past two years, many researchers have warned that open source package managers are places where threat actors deposit malicious copies of legitimate tools or libraries that developers want, often mimicking or copying the names of these tools – a technique called typosquatting — to fool unsuspecting developers. ... The tactic the researchers call command jacking involves using entry points to masquerade as widely-used third-party tools. “This tactic is particularly effective against developers who frequently use these tools in their workflows,” the report notes. For instance, an attacker might create a package with a malicious ‘aws’ entry point. When unsuspecting developers who regularly use AWS services install this package and later execute the aws command, the fake ‘aws’ command could exfiltrate their AWS access keys and secrets. “This attack could be devastating in CI/CD [continuous integration/continuous delivery] environments, where AWS credentials are often stored for automated deployments,” says the report


The Compelling Case for a Digital Transformation Revolution

A successful revolution of the Digital Transformation industry would result in the following characteristics:DX initiatives would deliver a solution within the time and budget constraints of the original estimate used to calculate the Return on Investment (ROI) DX initiatives would measurably enhance the transformed company’s ability to meet their stated business objectives DX initiatives would be maintained and supported by the transformed company without an indefinite dependence on consultants ... There is a need to adopt a set of principles and corresponding values which, when followed, will lead to successful outcomes in digital transformation. In today’s virtual world we have the opportunity to call together DX practitioners from around the world to participate in drafting those principles and values. If you have experience in leading successful DX initiatives, I invite you to join me in this endeavor to revolutionize DX. Following in the footsteps of the Agile Alliance, I have decided to propose four sets of values and 12 principles upon which those values are based. These values mirror the wording used by the Agile Alliance, but have been updated to apply to digital transformation projects rather than software development.


Leadership with a Purpose: The Transformative Impact of Corporate Retreats

Our retreats are carefully designed to balance introspection, relaxation, and rejuvenation. We create bespoke itineraries tailored to the specific goals and needs of the leadership team, with activities focused on mental clarity, emotional well-being, and mindful leadership—critical for long-term effectiveness in today’s high-pressure corporate world. Unlike conventional retreats, Ekaanta is not just about unwinding; it's about equipping leaders with tools (such as Super Brain Yog) that enable them to become more resilient and purpose-driven when they return to work. What truly sets us apart is the blend of ancient Eastern practices with modern scientific approaches to well-being. At Ekaanta, leaders are not merely participants but learners. Each module provides deep insights, helping them recalibrate their personal and professional lives. Our setting by the Ganges, combined with nature-based practices like Shinrin-Yoku (forest bathing), offers holistic rejuvenation that can’t be replicated in traditional settings. We also offer Cognitive Flow Workshops, integrating neuroscience with mindfulness to enhance decision-making, and Leadership Circles, where participants engage in meaningful discussions on leadership challenges and growth.


In with the new: how banking systems can use data more effectively

Technologies that help businesses capture and analyse their data can also help to automate traditional back-office processes, such as those in trade finance operations. “The work that Microsoft is doing in trade finance focuses on data,” says Hazou. “In the current environment, trade finance documentation is processed manually. There are said to be four billion pieces of paper in circulation for trade finance every year. This is because it follows an old business model that dates back to the House of Medici, an Italian banking family in the 15th century. A lot of the documents – such as bills of lading and exchange, invoices and certificates of inspections – have been mandated to be in paper form due to pre-existing legislation.” In 2022, the International Chamber of Commerce estimated that digitising trade documents could generate $25 billion in new economic growth by 2024, and the industry is already making significant changes to digitise and automate bank processing, paving the way for increased efficiency globally. “There have been changes to the regulations for trade paper,” says Hazou. ... “Users can ask simple natural language questions and the copilot will transform them into queries about the business and respond with the answers that they need,” says Martin McCann, CEO of Trade Ledger. 


A Deep-Dive Into CodeOps or DevOps for Code

CodeOps is a relatively new concept within the context of DevOps that addresses the challenges related to code automation and management. Its goal is to speed up the development process by improving how code is written, verified, released and maintained. By leveraging CodeOps, your code will become more streamlined, effective, and coherent with your business requirements. ... In recent times, DevOps emerged to modernize Agile software development, enabling teams to not only build but also deploy the software products and solutions as quickly as they can build them. This resulted in an unprecedented surge in software creation worldwide. Several frameworks, such as DevSecOps, MLOps, AIOps, DataOps, CloudOps and GitOps, have also emerged. Each framework addresses specific engineering disciplines to enhance operational efficiency. However, several challenges in software development remain unaddressed. ... CodeOps leverages generative AI to drive innovation, accelerate software development through reusable code and promote business growth. Today’s businesses are implementing CodeOps as a revolutionary concept for developing digital products. As a result, organizations can overcome challenges, innovate and build as well as deploy software quickly.


Microservices Testing: Feature Flags vs. Preview Environments

In traditional monolithic applications, testing a new feature often involves verifying the entire application as a whole. In microservices, each service is developed, deployed and tested independently, making it harder to predict how changes in one service might affect others. For example, a small change to an authentication service could unexpectedly break the payment processor if their interaction isn’t tested thoroughly. To ensure that such issues are caught early and before they impact users, testing strategies must evolve. This is where feature flags and preview environments come into play. Feature flags provide a dynamic way to manage feature rollouts by decoupling deployment from release. ... Effective microservices testing requires balancing speed and reliability. Feature flags enable real-time testing in production but often lack isolation for complex integration issues. Preview environments offer isolation for premerge testing but can be resource-intensive and may not fully replicate production traffic. The best approach? Combine both. Use preview environments to catch bugs early, and then deploy with feature flags to control the release in production. This ensures speed without sacrificing quality.


The quantum dilemma: Game-changer or game-ender

Experts predict that a quantum computer can use the Shor algorithm to easily crack encryption methods such as the RSA (Rivest-Shamir-Adleman), which is the strongest and most common encryption method on the internet. Imagine if a quantum computer could decrypt internet communications: it would enable adversaries and rogue nations to gain access to sensitive and classified information, posing a major threat to national and organizational security. Cybersecurity experts believe that some threat actors and rogue nations may have already kicked off a “harvest now, decrypt later” strategy, so that when these quantum tools do arrive, they can immediately operationalize them for malicious and strategic purposes. ... Quantum computing is a type of breakthrough where government interference might be extremely high. Organizations could find themselves cut off from quantum’s supercharged processing power, because it may well be developed by a government for its own ends, or restricted to protect national interests. Pending regulations could also create uncertainty across industries, stifling innovation as companies are forced to navigate the complexities of compliance and adjust their strategies to meet new legal requirements.


Regulation with reward: How DORA can enhance businesses

Everyone now lives in an environment where what they do is either in the cloud or attached to some kind of dedicated internet access service. If you have just one internet connection and that goes down, you no longer have operational resilience. That’s what DORA is trying to mitigate and where the network operators get involved ahead of time to provide redundancy. This is just one part of a series of regulations either introduced or coming down the tracks. The likes of GDPR, NISD, and NIS2 are all working with essentially the same goal in mind as DORA. Companies are being required to take ownership of their security policies in the C-suite and ensure effective measures have been taken. DORA addresses one of the pillars around operational resilience, specifically on ensuring that connectivity aspect is maintained. Any organisation working in the financial sector, including ICT providers, needs to step up and meet the standards being set by DORA. The majority of monitoring and threat awareness is now managed through the cloud. That requires a resilient internet connection to ensure constant visibility and observation of the regulations.


7 signs you may not be a transformational CIO

Functional CIOs “often lack the vision to reimagine business models and focus too narrowly on maintaining existing systems rather than driving innovation,” says Dr. Ina Sebastian, a research scientist at the MIT Center for Information Systems Research (CISR), and co-author of the book Future Ready: Four Pathways to Capturing Digital Value. “These CIOs might not prioritize aligning technology investments with customer needs, creating a common framework and language for discussing and prioritizing digital strategies, or developing a clear strategy for navigating the complexities of digital transformation,” Sebastian says. If a CIO can’t articulate a clear vision of how technology will transform the business, it is unlikely they will inspire their staff. Some CIOs are reluctant to invest in emerging technologies such as AI or machine learning, viewing them as experimental rather than tools for gaining competitive advantage. There’s also a tendency to focus on short-term gains rather than long-term strategic goals. Another indicator is a lack of engagement with other departments to understand their needs and challenges, which can result in siloed operations and missed opportunities to foster innovation.



Quote for the day:

"Your first and foremost job as a leader is to take charge of your own energy and then help to orchestrate the energy of those around you." -- Peter F. Drucker

Daily Tech Digest - October 13, 2024

Fortifying Cyber Resilience with Trusted Data Integrity

While it is tempting to put all of the focus on keeping the bad guys out, there is an important truth to remember: Cybercriminals are persistent and eventually, they find a way in. The key is not to try and build an impenetrable wall, because that wall does not exist. Instead, organizations need to have a defense strategy at the data level. By monitoring data for signs of ransomware behavior, the spread of the attack can be slowed or even stopped. It includes analyzing data and watching for patterns that indicate a ransomware attack is in progress. When caught early, organizations have the power to stop the attack before it causes widespread damage. Once an attack has been identified, it is time to execute the curated recovery plan. That means not just restoring everything in one action but instead selectively recovering the clean data and leaving the corrupted files behind. ... Trusted data integrity offers a new way forward. By ensuring that data remains clean and intact, detecting corruption early, and enabling a faster, more intelligent recovery, data integrity is the key to reducing the damage and cost of a ransomware attack. In the end, it’s all about being prepared. 


Regulating AI Catastophic Risk Isn't Easy

Catastrophic risks are those that cause a failure of the system, said Ram Bala, associate professor of business analytics at Santa Clara University's Leavey School of Business. Risks could range from endangering all of humanity to more contained impact, such as disruptions affecting only enterprise customers of AI products, he told Information Security Media Group. Deming Chen, professor of electrical and computer engineering at the University of Illinois, said that if AI were to develop a form of self-interest or self-awareness, the consequences could be dire. "If an AI system were to start asking, 'What's in it for me?' when given tasks, the results could be severe," he said. Unchecked self-awareness might drive AI systems to manipulate their abilities, leading to disorder, and potentially catastrophic outcomes. Bala said that most experts see these risks as "far-fetched," since AI systems currently lack sentience or intent, and likely will for the foreseeable future. But some form of catastrophic risk might already be here. Eric Wengrowski, CEO of Steg.AI, said that AI's "widespread societal or economic harm" is evident in disinformation campaigns through deepfakes and digital content manipulation. 


The Importance of Lakehouse Formats in Data Streaming Infrastructure

Most data scientists spend the majority of their time updating those data in a single format. However, when your streaming infrastructure has data processing capabilities, you can update the formats of that data at the ingestion layer and land the data in the standardized format you want to analyze. Streaming infrastructure should also scale seamlessly like Lakehouse architectures, allowing organizations to add storage and compute resources as needed. This scalability ensures that the system can handle growing data volumes and increasing analytical demands without major overhauls or disruptions to existing workflows. ... As data continues to play an increasingly central role in business operations and decision-making, the importance of efficient, flexible, and scalable data architectures will only grow. The integration of lakehouse formats with streaming infrastructure represents a significant step forward in meeting these evolving needs. Organizations that embrace this unified approach to data management will be better positioned to derive value from their data assets, respond quickly to changing market conditions, and drive innovation through advanced analytics and AI applications.


Open source culture: 9 core principles and values

Whether you’re experienced or just starting out, your contributions are valued in open source communities. This shared responsibility helps keep the community strong and makes sure the projects run smoothly. When people come together to contribute and work toward shared goals, it fuels creativity and drives productivity. ... While the idea of meritocracy is incredibly appealing, there are still some challenges that come along with it. In reality, the world is not fair and people do not get the same opportunities and resources to express their ideas. Many people face challenges such as lack of resources or societal biases that often go unacknowledged in "meritocratic" situations. Essentially, open source communities suffer from the same biases as any other communities. For meritocracy to truly work, open source communities need to actively and continuously work to make sure everyone is included and has a fair and equal opportunity to contribute. ... Open source is all about how everyone gets a chance to make an impact and difference. As mentioned previously, titles and positions don’t define the value of your work and ideas—what truly matters is the expertise, work and creativity you bring to the table.


How to Ensure Cloud Native Architectures Are Resilient and Secure

Microservices offer flexibility and faster updates but also introduce complexity — and more risk. In this case, the company had split its platform into dozens of microservices, handling everything from user authentication to transaction processing. While this made scaling more accessible, it also increased the potential for security vulnerabilities. With so many moving parts, monitoring API traffic became a significant challenge, and critical vulnerabilities went unnoticed. Without proper oversight, these blind spots could quickly become significant entry points for attackers. Unmanaged APIs could create serious vulnerabilities in the future. If these gaps aren’t addressed, companies could face major threats within a few years. ... As companies increasingly embrace cloud native technologies, the rush to prioritize agility and scalability often leaves security as an afterthought. But that trade-off isn’t sustainable. By 2025, unmanaged APIs could expose organizations to significant breaches unless proper controls are implemented today. ... As companies increasingly embrace cloud native technologies, the rush to prioritize agility and scalability often leaves security as an afterthought. But that trade-off isn’t sustainable. By 2025, unmanaged APIs could expose organizations to significant breaches unless proper controls are implemented today.


Focus on Tech Evolution, Not on Tech Debt

Tech Evolution represents a mindset shift. Instead of simply repairing the system, Tech Evolution emphasises continuous improvement, where the team proactively advances the system to stay ahead of future requirements. It’s a strategic, long-term investment in the growth and adaptability of the technology stack. Tech Evolution is about future-proofing your platform. Rather than focusing on past mistakes (tech debt), the focus shifts toward how the technology can evolve to accommodate new trends, user demands, and business goals. ... One way to action Tech Evolution is to dedicate time specifically for innovation. Development teams can use innovation days, hackathons, or R&D-focused sprints to explore new ideas, tools, and frameworks. This builds a culture of experimentation and continuous learning, allowing the team to identify future opportunities for evolving the tech stack. ... Fostering a culture of continuous learning is essential for Tech Evolution. Offering training programs, hosting workshops, and encouraging attendance at conferences ensures your team stays informed about emerging technologies and best practices. 


Singapore’s Technology Empowered AML Framework

Developed by the Monetary Authority of Singapore (MAS) in collaboration withhttps://cdn.opengovasia.com/wp-content/uploads/2024/10/Article_08-Oct-2024_1-Sing-1270-1.jpg six major banks, COSMIC is a centralised digital platform for global information sharing among financial institutions to combat money laundering, terrorism financing, and proliferation financing, enhancing defences against illicit activities. By pooling insights from different financial entities, COSMIC enhances Singapore’s ability to detect and disrupt money laundering schemes early, particularly when transactions cross international borders(IMC Report). Another significant collaboration is the Anti-Money Laundering/Countering the Financing of Terrorism Industry Partnership (ACIP). This partnership between MAS, the Commercial Affairs Department (CAD) of the Singapore Police Force, and private-sector financial institutions allows for the sharing of best practices, the issuance of advisories, and the development of enhanced AML measures. ... Another crucial aspect of Singapore’s AML strategy is the AML Case Coordination and Collaboration Network (AC3N). This new framework builds on the Inter-Agency Suspicious Transaction Reports Analytics (ISTRA) task force to improve coordination between all relevant agencies.


Future-proofing Your Data Strategy with a Multi-tech Platform

Traditional approaches that were powered by a single tool or two, like Apache Cassandra or Apache Kafka, were once the way to proceed. However, now used alone, these tools are proving insufficient to meet the demands of modern data ecosystems. The challenges presented by today’s distributed, real-time, and unstructured data have made it clear that businesses need a new strategy. Increasingly, that strategy involves the use of a multi-tech platform. ... Implementing a multi-tech platform can be complex, especially considering the need to manage integrations, scalability, security, and reliability across multiple technologies. Many organizations simply do not have the time or expertise in the different technologies to pull this off. Increasingly, organizations are partnering with a technology provider that has the expertise in scaling traditional open-source solutions and the real-world knowledge in integrating the different solutions. That’s where Instaclustr by NetApp comes in. Instaclustr offers a fully managed platform that brings together a comprehensive suite of open-source data technologies. 


Strong Basics: The Building Blocks of Software Engineering

It is alarmingly easy to assume a “truth” on faith when, in reality, it is open to debate. Effective problem-solving starts by examining assumptions because the assumptions that survive your scrutiny will dictate which approaches remain viable. If you didn’t know your intended plan rested on an unfounded or invalid assumption, imagine how disastrously it would be to proceed anyway. Why take that gamble? ... Test everything you design or build. It is astounding how often testing gets skipped. A recent study showed that just under half of the time, information security professionals don’t audit major updates to their applications. It’s tempting to look at your application on paper and reason that it should be fine. But if everything worked like it did on paper, testing would never find any issues — yet so often it does. The whole point of testing is to discover what you didn’t anticipate. Because no one can foresee everything, the only way to catch what you didn’t is to test. ... companies continue to squeeze out more productivity from their workforce by adopting the cutting-edge technology of the day, generative AI being merely the latest iteration of this trend. 


The resurgence of DCIM: Navigating the future of data center management

A significant factor behind the resurgence of DCIM is the exponential growth in data generation and the requirement for more infrastructure capacity. Businesses, consumers, and devices are producing data at unprecedented rates, driven by trends such as cloud computing, digital transformation, and the Internet of Things (IoT). This influx of data has created a critical demand for advanced tools that can offer comprehensive visibility into resources and infrastructure. Organizations are increasingly seeking DCIM solutions that enable them to efficiently scale their data centers to handle this growth while maintaining optimal performance. ... Modern DCIM solutions, such as RiT Tech’s XpedITe, also leverage AI and machine learning to provide predictive maintenance capabilities. By analyzing historical data and identifying patterns, it will predict when equipment is likely to fail and automatically schedule maintenance ahead of any failure as well as providing automation of routine tasks such as resource allocations. As data centers continue to grow in size and complexity, effective capacity planning becomes increasingly important. DCIM solutions provide the tools needed to plan and optimize capacity, ensuring that data center resources are used efficiently and that there is sufficient capacity to meet future demand.



Quote for the day:

“Too many of us are not living our dreams because we are living our fears.” -- Les Brown

Daily Tech Digest - October 12, 2024

Technically skilled managers can be great leaders, if they do this

Outlier leaders often struggle with delegation because they hold themselves—and their work—to impossibly high standards. To ensure that everything is perfect, you may take on too much, which leads to burnout and stifles your team’s potential. This approach drains your energy and can cause your team to feel undervalued or micromanaged. Building trust in your team is crucial for leveraging their strengths and empowering them to contribute to the organization’s success. Trust isn’t just about assuming your team will do their job; it’s about giving them the freedom to innovate, make mistakes, and grow. When your team feels trusted, they are more likely to take ownership of their work and deliver results that align with your vision. Start by delegating smaller tasks and gradually increase the responsibility you give to team members. ... Fostering a culture where team members feel comfortable sharing their thoughts, ideas, and concerns is key to maintaining strong team cohesion. It also helps outlier leaders stay connected with their teams, giving them a better understanding of what’s working and what’s not. Make communication a priority by holding regular team meetings focused not just on project updates but on sharing feedback, discussing challenges, and exploring new ideas.


6 biggest healthcare security threats

Traffic from bad bots — such as those that attempt to scrape data from websites, send spam, or download unwanted software — present another major challenge for healthcare organizations. The problem has become especially pressing when governments around the world began setting up new websites and other digital infrastructure to support COVID vaccine registrations and appointments. Bad actors bombarded these new, hastily established and largely untested sites with a huge volume of bad-bot traffic. Imperva says it has observed a 372% increase in bad-bot traffic on healthcare websites in the first year of the pandemic. “Increased levels of traffic result in downtime and disruption for legitimate human users who are trying to access critical services on their healthcare providers’ site,” Ray says. “It might also result in increased infrastructure costs for the organization as it tries to sustain uptime from the persistent, burdensome level of elevated traffic.” ... Wearable and implantable smart medical devices are a proven cybersecurity risk. These technologies certainly offer better analysis, assisting diagnosis of medical conditions while aiding independent living, but mistakes made in securing such medtech have exposed vulnerable users to potential attack.


Cybercriminals Are Targetting AI Conversational Platforms

Besides the issue of retained PII stored in communications between the AI agent and end users, bad actors were also able to target access tokens, which could be used by enterprises for the implementation of the service with APIs of external services and applications. According to Resecurity, due to the significant penetration of external AI systems into enterprise infrastructure and the processing of massive volumes of data, their implementation without proper risk assessment should be considered an emerging IT supply chain cybersecurity risk. The experts from Resecurity outlined the need for AI trust, risk, and security management (TRiSM), as well as Privacy Impact Assessments (PIAs) to identify and mitigate potential or known impacts that an AI system may have on privacy, as well as increased attention to supply chain cybersecurity. Conversational AI platforms have already become a critical element of the modern IT supply chain for major enterprises and government agencies. Their protection will require a balance between traditional cybersecurity measures relevant to SaaS (Software-as-a-Service) and those specialized and tailored to the specifics of AI, highlighted the threat research team at Resecurity.


Leveraging digital technologies for intralogistics optimisation

With rapid advances in digital technologies such as robotics, artificial intelligence (AI) and the Internet of Things (IoT), companies can now optimise their intralogistics processes more easily and achieve better results. The following sections focus on a range of key aspects that can be significantly improved by leveraging digital technologies in support of intralogistics. ... Wearables are often used in warehouses to optimise worker movements, enable picking and packing efficiency, and ensure worker safety. Wearables equipped with augmented reality technology can be used for navigation and guiding employees for picking and packing operations. These techniques yield benefits not only by speeding up processes and reducing errors but also by reducing employee training time. ... The overall benefits of intralogistics optimisation are significant in terms of process efficiency, but securing the benefits depends on a range of sub-domains as described above. These technologies enhance visibility of operations by enabling real-time monitoring at every stage of production, right from the raw material supply stage up to the final delivery of the manufactured goods stage, which in turn can enhance efficiency and reduce downtime and production cost. 


Do we even need Product Managers?

The brand manager of the past was in many ways the product manager of the present. And not long after we began our discussion, Shreyas brought up the rather famous example of Procter and Gamble’s ‘Brand Man’ memo. ... And then, as tech gave businesses the ability to slice and dice consumer categories and cohorts and track multiple data points, the more ‘artistic’ PMs—those who made decisions based on intuition and taste—had to find patterns that would align with what the data suggested. But consumer data isn’t absolute truth. Humans are irrational. They might do one thing, and say something completely different in a survey. Or a feedback form. And as Chandra outlines, most companies are drowning in data now, with no clue what to do with all the metrics they track. ... When do you hire your first Product Manager? This is a question that an increasing number of CEOs and founders are asking in the post-ZIRP era—where efficiency is key and AI is pushing into more and more functions. So what happens when you sit down and ask yourself, “Do I need to hire product managers?” ... The one certainty that emerged from our discussion was that the role of a Product Manager has to evolve and break out of the boundaries it is now enclosed in, although some skills and characteristics will remain constant.


Scaling Uber’s Batch Data Platform: A Journey to the Cloud with Data Mesh Principles

One significant challenge that Uber has faced during the migration process is the need to accommodate changes in data ownership and the limits set by GCS. Data ownership changes can occur due to team reorganizations or users reassigning assets. To address this, Uber implemented an automated process to monitor and reassign ownership when necessary, ensuring data remains securely stored and managed. Additionally, Uber optimized its data distribution to avoid hitting GCS storage limits, ensuring that heavily used tables are separated into their buckets to improve performance and make monitoring easier. ... Looking to the future, Uber aims to further expand on its use of data mesh principles by building a platform that allows for self-governed data domains. This will simplify infrastructure management and enhance data governance, ultimately creating a more agile, secure, and cost-efficient data ecosystem. The cloud migration of Uber’s batch data platform is a significant undertaking, but through careful planning and the development of innovative tools like DataMesh, Uber is positioning itself for greater scalability, security, and operational efficiency in the cloud.


ShadowLogic Attack Targets AI Model Graphs to Create Codeless Backdoors

By using the ShadowLogic method, HiddenLayer says, threat actors can implant codeless backdoors in ML models that will persist across fine-tuning and which can be used in highly targeted attacks. Starting from previous research that demonstrated how backdoors can be implemented during the model’s training phase by setting specific triggers to activate hidden behavior, HiddenLayer investigated how a backdoor could be injected in a neural network’s computational graph without the training phase. “A computational graph is a mathematical representation of the various computational operations in a neural network during both the forward and backward propagation stages. In simple terms, it is the topological control flow that a model will follow in its typical operation,” HiddenLayer explains. Describing the data flow through the neural network, these graphs contain nodes representing data inputs, the performed mathematical operations, and learning parameters. “Much like code in a compiled executable, we can specify a set of instructions for the machine (or, in this case, the model) to execute,” the security company notes.


What Goes Into AI? Exploring the GenAI Technology Stack

Compiling the training datasets involves crawling, compiling, and processing all text (or audio or visual) data available on the internet and other sources (e.g., digitized libraries). After compiling these raw datasets, engineers layer in relevant metadata (e.g., tagging categories), tokenize data into chunks for model processing, format data into efficient training file formats, and impose quality control measures. While the market for AI model-powered products and services may be worth trillions within a decade, many barriers to entry prevent all but the most well-resourced companies from building cutting-edge models. The highest barrier to entry is the millions to billions of capital investment required for model training. To train the latest models, companies must either construct their own data centers or make significant purchases from cloud service providers to leverage their data centers. While Moore’s law continues to rapidly lower the price of computing power, this is more than offset by the rapid scale up in model sizes and computation requirements. Training the latest cutting-edge models requires billions in data center investment.


Navigating the Challenges of Hybrid IT Environments in the Age of Cloud Repatriation

Cloud repatriation can often create challenges of its own. The costs associated with moving services back on-prem can be significant: New hardware, increased maintenance, and energy expenses should all be factored in. Yet, for some, the financial trade-off for repatriation is worth it, especially if cloud expenses become unsustainable or if significant savings can be achieved by managing resources partially on-prem. Cloud repatriation is a calculated risk that, if done for the right reasons and executed successfully, can lead to efficiency and peace of mind for many companies. ... Hybrid cloud observability tools empower organizations to track performance, boost productivity, ensure system health, and swiftly resolve issues, leading to reduced downtime, fewer outages, and enhanced service availability for both employees and customers. By enhancing transparency and intelligence, observability tools ultimately strengthen the resilience of the entire IT infrastructure — no matter what path a company takes regarding the cloud. When deciding which workloads to move back on-prem versus which to keep in the cloud, companies should carefully consider their specific needs, such as cost constraints, performance requirements, and compliance obligations.


Solve People Silo Problems to Solve Data Silo Problems

According to Farooq, a critical aspect of any Chief Data Officer’s role is ensuring that data is both accessible and fit for purpose, supporting everything from revenue generation to regulatory compliance and risk management. Achieving these goals requires a robust data strategy that is closely aligned with the overall business strategy. Ultimately, it's all about the data. Reflecting on the challenges of integrating GenAI into institutions, particularly in managing unstructured data, Farooq likens the AI journey to Gartner's hype cycle, highlighting how innovations initially peak with inflated expectations before experiencing a period of disillusionment. However, unlike trends such as blockchain, Farooq believes AI is here to stay and will follow a different trajectory, leading to lasting productivity gains. From a data governance perspective, Farooq sees immediate practical applications for GenAI, such as creating synthetic test data for market scenarios and writing data quality rules. He emphasizes the importance of democratizing AI across all levels of an organization, similar to how data literacy became crucial for everyone—from CEOs to marketers. 



Quote for the day: 
"The whole point of getting things done is knowing what to leave undone." -- Lady Stella Reading

Daily Tech Digest - October 11, 2024

The Power of Going Opposite: Unlocking Hidden Opportunities in Business

It is more than just going opposite directions. This is rooted in a principle I call the Law of Opposites. With this, you look in the opposite direction everyone else is looking and what you will find are unique opportunities in the most inconspicuous places. By leveraging this law, business leaders and whole organizations can uncover hidden opportunities and create significant competitive advantage they would otherwise miss. Initially, many fear that looking in the opposite direction will leave them in uncharted territory unnecessarily. For instance, why would a restaurant look at what is currently going on in the auto industry? That should do quite the opposite, right? This principle of going opposite has the opposite effect of that fear, revealing unexpected opportunities! With an approach that places organizations on opposite sides of conventional thinking, you take on a new viewpoint. ... Leveraging the Law of Opposites and putting in the effort to go opposite has two critical benefits to organizations. For one, when faced with what appears to be insurmountable competition, it allows organizations to leap ahead, pivoting their offerings to see what others miss.


Top 5 Container Security Mistakes and How to Avoid Them

Before containers are deployed, you need assurance they don’t contain vulnerabilities right from the start. But unfortunately, many organizations fail to scan container images during the build process. That leaves serious risks lurking unseen. Unscanned container images allow vulnerabilities and malware to easily slip into production environments, creating significant security issues down the road. ... Far too often, developers demand (and receive) excessive permissions for container access, which trailblazes unnecessary risks. If compromised or misused, overprivileged containers can lead to devastating security incidents. ... Threat prevention shouldn’t stop once a container launches, either. But some forget to extend protections during the runtime phase. Containers left unprotected at runtime allow adversarial lateral movement across environments if compromised. ... Container registries offer juicy targets when left unprotected. After all, compromise the registry, and you will have the keys to infect every image inside. Unsecured registries place your entire container pipeline in jeopardy if accessed maliciously. ... You can’t protect what you can’t see. Monitoring gives visibility into container health events, network communications, and user actions.


Why you should want face-recognition glasses

Under the right circumstances, we can easily exchange business card-type information by simply holding the two phones near each other. To give a business card is to grant permission for the receiver to possess the personal information thereon. It would be trivial to add a small bit of code to grant permission for face recognition. Each user could grant that permission with a checkbox in the contacts app. That permission would automatically share both the permission and a profile photo. Face-recognition permission should be grantable and revokable at any time on a person-by-person basis. Ten years from now (when most everyone will be wearing AI glasses), you could be alerted at conferences and other business events about everyone you’ve met before, complete with their name, occupation, and history of interaction. Collecting such data throughout one’s life on family and friends would also be a huge benefit to older people suffering from age-related dementia or just from a naturally failing memory. Shaming AI glasses as a face-recognition privacy risk is the wrong tactic, especially when the glasses are being used only a camera. Instead, we should recognize that permission-based face-recognition features in AI glasses would radically improve our careers and lives.
 

Operationalize a Scalable AI With LLMOps Principles and Best Practices

The recent rise of Generative AI with its most common form of large language models (LLMs) prompted us to consider how MLOps processes should be adapted to this new class of AI-powered applications. LLMOps (Large Language Models Operations) is a specialized subset of MLOps (Machine Learning Operations) tailored for the efficient development and deployment of large language models. LLMOps ensures that model quality remains high and that data quality is maintained throughout data science projects by providing infrastructure and tools. Use a consolidated MLOps and LLMOps platform to enable close interaction between data science and IT DevOps to increase productivity and deploy a greater number of models into production faster. MLOps and LLMOps will both bring Agility to AI Innovation to the project. ... Evaluating LLMs is a challenging and evolving domain, primarily because LLMs often demonstrate uneven capabilities across different tasks. LLMs can be sensitive to prompt variations, demonstrating high proficiency in one task but faltering with slight deviations in prompts. Since most LLMs output natural language, it is very difficult to evaluate the outputs via traditional Natural Language Processing metrics. 


Using Chrome's accessibility APIs to find security bugs

Chrome exposes all the UI controls to assistive technology. Chrome goes to great lengths to ensure its entire UI is exposed to screen readers, braille devices and other such assistive tech. This tree of controls includes all the toolbars, menus, and the structure of the page itself. This structural definition of the browser user interface is already sometimes used in other contexts, for example by some password managers, demonstrating that investing in accessibility has benefits for all users. We’re now taking that investment and leveraging it to find security bugs, too. ... Fuzzers are unlikely to stumble across these control names by chance, even with the instrumentation applied to string comparisons. In fact, this by-name approach turned out to be only 20% as effective as picking controls by ordinal. To resolve this we added a custom mutator which is smart enough to put in place control names and roles which are known to exist. We randomly use this mutator or the standard libprotobuf-mutator in order to get the best of both worlds. This approach has proven to be about 80% as quick as the original ordinal-based mutator, while providing stable test cases.


Investing in Privacy by Design for long-term compliance

Organizations still have a lot of prejudice when discussing principles like Privacy by Design which comes from the lack of knowledge and awareness. A lot of organizations which are handling sensitive private data have a dedicated Data Protection Officer only on paper, and that person performing the role of the DPO is often poorly educated and misinformed regarding the subject. Companies have undergone a shallow transformation and defined the roles and responsibilities when the GDPR was put into force, often led by external consultants, and now those DPO’s in the organizations are just trying to meet the minimum requirements and hope everything turns out for the best. Most of the legacy systems in companies were ‘taken care of’ during these transformations, impact assessments were made, and that was the end of the discussion about related risks. For adequate implementation of principles like Privacy by Design and Security by Design, all of the organization has to be aware that this is something that has to be done, and support from all the stakeholders needs to be ensured. By correctly implementing Privacy by Design, privacy risks need to be established at the beginning, but also carefully managed until the end of the project, and then periodically reassessed. 


Benefits of a Modern Data Historian

With Industry 4.0, data historians have advanced significantly. They now pull in data from IoT devices and cloud platforms, handling larger and more complex datasets. Modern historians use AI and real-time analytics to optimize operations across entire businesses, making them more scalable, secure, and integrated with other digital systems, perfectly fitting the connected nature of today’s industries. Traditional data historians were limited in scalability and integration capabilities, often relying on manual processes and statistical methods of data collection and storage. Modern data historians, particularly those built using a time series database (TSDB), offer significant improvements in speed and ease of data processing and aggregation. One such foundation for a modern data historian is InfluxDB. ... Visualizing data is crucial for effective decision-making as it transforms complex datasets into intuitive, easily understandable formats. This lets stakeholders quickly grasp trends, identify anomalies, and derive actionable insights. InfluxDB seamlessly integrates with visualization tools like Grafana, renowned for its powerful, interactive dashboards.


Beyond Proof of Concepts: Will Gen AI Live Up to the Hype?

Two years after ChatGPT's launch, the experimental stage is largely behind CIOs and tech leaders. What once required discretionary funding approval from CFOs and CEOs has now evolved into a clear recognition that gen AI could be a game changer. But scaling this technology across multiple business use cases while aligning them with strategic objectives - without overwhelming users - is a more practical approach. Still, nearly 90% of gen AI projects remain stuck in the pilot phase, with many being rudimentary. According to Gartner, one major hurdle is justifying the significant investments in gen AI, particularly when the benefits are framed merely as productivity enhancements, which may not always translate into tangible financial gains. "Many organizations leverage gen AI to transform their business models and create new opportunities, yet they continue to struggle with realizing value," said Rita Salaam, distinguished vice president analyst at Gartner. ... In another IBM survey, tech leaders revealed that half of their IT budgets will be allocated to AI and cloud over the next two years. This shift suggests that gen AI is transitioning from the "doubt" phase to the "confidence" phase.


Microsoft’s Take on Kernel Access and Safe Deployment Following CrowdStrike Incident

This was discussed at some length at the MVI summit. “We face a common set of challenges in safely rolling out updates to the large Windows ecosystem, from deciding how to do measured rollouts with a diverse set of endpoints to being able to pause or rollback if needed. A core SDP principle is gradual and staged deployment of updates sent to customers,” comments Weston in a blog on the summit. “This rich discussion at the Summit will continue as a collaborative effort with our MVI partners to create a shared set of best practices that we will use as an ecosystem going forward,” he blogged. Separately, he expanded to SecurityWeek: “We discussed ways to de-conflict the various SDP approaches being used by our partners, and to bring everything together as a consensus on the principles of SDP. We want everything to be transparent, but then we want to enforce this standard as a requirement for working with Microsoft.” Agreeing and requiring a minimum set of safe deployment practices from partners is one thing; ensuring that those partners employ the agreed SDP is another. “Technical enforcement would be a challenge,” he said. “Transparency and accountability seem to be the best methodology for now.”


What Hybrid Quantum-Classic Computing Looks Like

Because classical and quantum computers have limitations, the two are being used as a hybrid solution. For example, a quantum computer is an accelerator for a classical computer and classical computers can control quantum systems. However, there are challenges. One challenge is that classical computers and quantum computers operate at different ambient temperatures, which means a classical computer can’t run in a near zero Kelvin environment, nor can a quantum computer operate in a classical environment. Therefore, separating the two is necessary. Another challenge is that quantum computers are very noisy and therefore error prone. To address that issue, Noisy Intermediate-Scale Quantum or NISQ computing emerged. The assumption is that one must just accept the errors and create variational algorithms. In this vein, one guesses what a solution looks like and then attempts to tweak the parameters of it using something like Stochastic gradient descent, which is used to train neural networks. Using a hybrid system, the process is iterative. The classical computer measures the state of the of qubits, analyzes them and sends instructions for what to do next. This is how the classical-quantum error correction iterations work at a high level.



Quote for the day:

"Facing difficult circumstances does not determine who you are. They simply bring to light who you already were." -- Chris Rollins

Daily Tech Digest - October 08, 2024

How To Secure Microservices in a Multicloud Architecture

In a microservices architecture, each service operates independently, allowing updates, maintenance and modifications without disrupting others. This isolation should extend across infrastructure layers, including databases, ensuring no service can access another’s data. Full isolation prevents attackers from moving laterally within the system. ... Sensitive data, such as passwords or personal information, should never be exposed in plain text or storage. Users and automated systems can easily access this information making it vulnerable to threats. Businesses should always remove or mask this information before storing it in any records. Practices like TLS/HTTPS or encrypting logs are not enough, since one caters to securing data in transit while the other secures data at rest. Hence, the best way is to stop storing sensitive information altogether. ... Zero trust security works on the idea that no user or device should be trusted by default, whether inside or outside the network. By using the zero trust model, businesses can make sure every user and device is constantly authenticated and authorized, no matter where they are. In microservices, this means checking every interaction between services, enforcing strict access controls and logging all actions. 


The road to Industry 5.0 is your data and AI

When Industry 5.0 emerges, we can expect to see the convergence of all that work and collected data. The next industrial revolution will be steeped in bridging the physical and the digital realms. Effectively this goes back to that human versus machine argument, but optimizing both human and machine to enhance their capabilities. AI and cloud computing will reach a harmony where workers can produce their best results, which can be replicated in processes throughout the supply chain. Industrial AI powers our lives in the back end. Industrial AI capabilities will enable power decision-making, and won't be a force for contention despite speculation. ... From the regulatory complexities of data collection and storage to varying levels of AI adoption within businesses, a successful transition into Industry 5.0 requires expert support. Costs of AI investments can snowball, so you must be strategic and targeted at improving specific areas of your business. Generic, off-the-shelf AI tools trained on irrelevant data won’t help here. To remain competitive at a global scale, companies need to invest in this technology and work with proven partners.


Why we’re teaching LLMs to forget things

Selective forgetting, something that humans are all too good at, turns out to be exceptionally difficult to recreate in machine learning models. That’s especially true for a class of AI models known as foundation models that may have picked up personal, copyrighted, or toxic information buried in their training data. ... “True unlearning tries to remove all vestiges of the unwanted information, so that when the model gets a problematic question, it simply doesn’t have the answer,” she added. “A model that has ‘unlearned’ insulting behavior no longer knows how to be toxic.” Ideally, unlearning also comes with a mathematical guarantee that the unwanted data’s influence on the model has been erased. Achieving that gold standard, however, typically involves retraining the model, which for LLMs can be prohibitively expensive. One option for unlearning without guarantees is to fine-tune the model on the unwanted data using an optimization technique known as gradient ascent to forget connections between data points. “Using gradient ascent to update the model’s weights is like running the model’s training in reverse,” said Swanand Ravindra Kadhe, a senior research scientist at IBM Research focused on unlearning. 


Will IPv6 ever replace IPv4?

The year is 2024 though, and the internet still runs on IPv4. So where did it all go wrong? IPv6 has been in migration hell for decades, with every kind of possible initiative to improve IPv6 adoption falling flat, from an official World IPv6 Day in 2011, the World IPv6 'launch' in 2012, and several US Federal government action plans in 2005, 2010, and 2020 (including mandating IPv6 readiness for government networks - a deadline initially set at 2012 and now extended to 2025). There have been numerous incentives for schools and businesses, promotional campaigns from registries and ISPs, conferences, and education campaigns. ... Another serious problem that's faced IPv6 adoption is NAT. NAT is a technology which was designed in 1994 to reduce the number of global IPv4 addresses needed. It allows devices on a private network to share a single IP address, and is present in almost all home routers (and has been for decades). NAT is the reason why your computer has an 'internal' IP address, and needs port forwarding to be accessible directly from the internet (firewall aside). NAT has allowed us to continue to grow the number of devices online well past the exhaustion point of IPv4 to a whopping 30 billion devices.


How the increasing demand for cyber insurance is changing the role of the CISO

Despite CISOs overseeing cybersecurity and the controls meant to blunt cyber risk, they have not historically been the executives who decide whether their organization buys cyber insurance. Instead, CFOs or chief risk officers typically make the call and determine what levels of protection to buy. However, CISOs are taking on larger roles — as they should — in those discussions and the decision-making process because they’re well-positioned to understand the threat landscape, the types of threats that could impact them, and how each one could impact the organization, says Paul Caron, Head of Cybersecurity, Americas at S-RM, a global corporate intelligence and cyber security consultancy. Generally speaking, CISOs are also best positioned to share the organization’s cybersecurity strategy and details of its security controls with insurance brokers or carriers, Caron says. “CISOs are the ones who can best tell their story.” And CISOs are best positioned to review the resources that a selected insurance company would possess to respond to an event and whether those resources would be the best choices. 


Many C-suite execs have lost confidence in IT, including CIOs

Many C-suite executives want the IT team to both keep the systems running and drive strategic innovation, he says, a challenging balance act. “Organizations perceive IT as struggling to meet these demands, particularly in deploying new technologies like AI, which have raised expectations among business leaders,” he says. “Challenges in managing legacy systems and ongoing talent shortages further exacerbate this issue.” In many cases, the traditional IT team has been separated from the R&D team, with the IT teams tasked with keeping the lights on, some tech leaders say. With IT and business strategies getting more intertwined, and the hard truths involved in that, the value traditionally driven by IT has shifted to product engineering and business units, says Martin Mao, CEO and co-founder of Chronosphere, a cloud observability platform. “The value is not seen in keeping the wheels on the bus,” he says. “IT is stuck in a negative spiral of cost cutting and defense mode versus innovation. There is a huge talent drain occurring from IT to the product engineering side of the house.” IT teams are often burdened with maintaining legacy systems while simultaneously asked to support new technologies such as AI, infrastructure as code, containerization, and cloud services, adds Kenny Van Alstyne


5 ways data scientists can prepare now for genAI transformation

Data scientists have traditionally developed dashboards as quick and easy ways to learn about new data sets or to help business users answer questions about their data. While data visualization and analytics platforms have added natural language querying and machine learning algorithms over the last several years, data scientists should anticipate a new wave of genAI-driven innovations. ... “With generative AI, the reliance on traditional dashboards diminishes as users can remove the noise of the analytics and get to actionable insights conversationally. Freed from ad-hoc dashboard-generation, data analysts and data scientists will concentrate on documenting organizational knowledge into semantic layers and conducting strategic analytics, creating a virtuous cycle.” Another prediction comes from Jerod Johnson, senior technology evangelist at CData, saying, “As genAI platforms become integrated into visualization tools, they enable more dynamic and interactive representations of data, allowing for real-time synthesis and scenario analysis. Over the next few years, data scientists can expect these tools to evolve to make visualizations more intuitive and insightful, even answering unasked questions for innovative discoveries.”


Can Responsible AI Live In The Cloud?

There are both benefits to be realised and pitfalls to be avoided when migrating AI to the cloud. Cloud providers offer high-spec, affordable infrastructure, often with better security arrangements than on-premises systems can provide – not to mention the capability to handle routine patching and updates. But there are a number of other factors to be mindful of, including: – Sovereignty: In many cases, it doesn’t matter where models are being trained, data transfer fees permitting. Compute in one area may be significantly cheaper than in another, but if you’re moving data to another country it’s important to understand how data will be handled there, including any differences in governmental or security process. – Sustainability: It’s also important to know how sustainable and power-efficient your AI cloud partner is, particularly if you’re transferring data to another country. Some countries have very good renewable energy mixes – but others don’t, and some datacentres are intrinsically more efficient than others. Do remember that your AI cloud provider will form part of your scope 3 emissions, so it pays to do your due diligence, particularly since AI can be very power hungry. – Suitability: The kind of data that your AI system is processing will have an impact on the kind of environment that it needs. 


The role of self-sovereign identity in enterprises

By allowing users to selectively share identity attributes, SSI mitigates the risk of overexposure of personal data. This is particularly important in industries like healthcare, financial services, and government, where stringent regulations such as GDPR, HIPAA, and CCPA dictate how personal information is managed. Passwords and traditional authentication methods have long been weak links in enterprise security, and a source of user friction. SSI can eliminate the need for passwords by enabling secure, passwordless authentication via verifiable credentials. This reduces the friction for users while maintaining high security standards. SSI can also improve customer satisfaction by simplifying secure access to services. For enterprises, SSI can also drive efficiency. By decentralizing identity verification, businesses can reduce their reliance on third-party identity providers, cutting costs and minimizing the delays associated with identity proofing processes. SSI’s interoperability across platforms and services ensures that enterprises can implement a single identity solution that works across a wide variety of use cases, from onboarding employees to authenticating customers and partners.


Reachability and Risk: Prioritizing Protection in a Complex Security Landscape

Despite the benefits, most organizations lack the tools and processes to analyze reachability across their infrastructure. Most are limited to a few common approaches with known downsides. External vulnerability scanners provide limited visibility into internal networks. Penetration testing typically focuses on external attack surfaces. And, manual analysis is incredibly time-consuming and error-prone. Achieving comprehensive reachability analysis is challenging, especially for large environments with tens of thousands of assets, as it’s difficult to compute all the states that a system might reach during operation. ... To address these challenges, organizations should leverage network digital twin technology. A sophisticated network digital twin collects L2-L7 state and configuration data across all network devices (load balancers, routers, firewalls and switches). This data is then used to create an accurate topology (on-prem and multi-cloud), calculate all possible paths within the network, analyze detailed behavioral information and make network configuration and behavior searchable and verifiable. Creating an accurate digital replica of an organization’s network infrastructure allows for automated analysis of potential attack paths and reachability between assets.



Quote for the day:

"Leadership is practices not so much in words as in attitude and in actions." -- Harold Geneen