Daily Tech Digest - November 24, 2024

AI agents are unlike any technology ever

“Reasoning” and “acting” (often implemented using the ReAct — Reasoning and Acting — framework) are key differences between AI chatbots and AI agents. But what’s really different is the “acting” part. If the main agent LLM decides that it needs more information, some kind of calculation, or something else outside the scope of the LLM itself, it can choose to solve its problem using web searches, database queries, calculations, code execution, APIs, and specialized programs. ... Since the dawn of computing, the users of software were human beings. With agents, for the first time ever, the software is also a user that uses software. Many of the software tools agents use are regular websites and applications designed for people. They’ll look at your screen, use your mouse to point and click, switch between windows and applications, open a browser on your desktop, and surf the web — in fact, all these abilities exist in Anthropic’s “Computer Use” feature. Other tools that the agent can access are designed exclusively for agent use. Because agents can access software tools, they’re more useful, modular, and adaptable. Instead of training an LLM from scratch, or cobbling together some automation process, you can instead provide the tools the agent needs and just let the LLM figure out how to achieve the task at hand.
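To make the reason-and-act loop concrete, below is a minimal Python sketch of a ReAct-style agent. It is illustrative only: `call_llm` is a hypothetical stand-in for a real model API, and the two tools are stubs.

```python
# Minimal ReAct-style agent loop (illustrative sketch, not any vendor's
# implementation). `call_llm` is a hypothetical stand-in for an LLM API.
def web_search(query: str) -> str:
    return f"(stub) top results for: {query}"

def calculator(expression: str) -> str:
    return str(eval(expression, {"__builtins__": {}}))  # demo only; unsafe for untrusted input

TOOLS = {"web_search": web_search, "calculator": calculator}

def call_llm(history: list[dict]) -> dict:
    # Hypothetical: a real agent would call a model here. The model replies
    # either with an action ({"type": "tool", "tool": ..., "input": ...})
    # or with a final answer.
    return {"type": "final", "answer": "(stub) no model attached"}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = call_llm(history)           # "reasoning": model decides the next move
        if step["type"] == "final":        # model is done
            return step["answer"]
        tool = TOOLS[step["tool"]]         # "acting": run the chosen tool
        observation = tool(step["input"])
        history.append({"role": "tool", "content": observation})
    return "step budget exhausted"

print(run_agent("What is 17 * 23?"))
```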


Live On the Edge

Why live on the edge now? Because, despite public cloud usage being ubiquitous, many deployments are ad hoc and poorly implemented. “The focus of refactoring cloud infrastructure should be on optimizing costs by eliminating redundant, overbuilt or unused cloud infrastructure,” says Gartner. ... Can edge computing also benefit the environment? Yes, according to a study by IBM Corp. “One direct way is by using edge computing to monitor protected species of wildlife inhabiting remote places,” IBM says. “Edge computing can help wildlife officials and park rangers identify and stop poaching activities, sometimes before these offenses even occur.” Another relates to energy management. “Edge computing supports the use of smart grids, which can deliver energy more efficiently and help businesses leave a smaller carbon footprint,” IBM notes. “Grid or distributed computing is where a group of machines and networks work together for a common computing purpose. Resources are utilized in an optimized manner, thus reducing the amount of waste that can occur when large quantities of power are consumed.” More significantly, edge computing can also support the remote monitoring of oil and gas assets. 


Getting started with AI agents (part 1): Capturing processes, roles and connections

An organizational chart might be a good place to start, but I would suggest starting with workflows, as the same people within an organization tend to work through different processes, and with different people, depending on the workflow. There are tools available that use AI to help identify workflows, or you can build your own gen AI model. I’ve built one as a GPT that takes the description of a domain or a company name and produces an agent network definition. Because I’m utilizing a multi-agent framework built in-house at my company, the GPT produces the network as a HOCON file, but it should be clear from the generated files what the roles and responsibilities of each agent are and what other agents it is connected to. Note that we want to make sure that the agent network is a directed acyclic graph (DAG). This means that no agent can be both down-chain and up-chain of any other agent, whether directly or indirectly. This greatly reduces the chances that queries in the agent network fall into a tailspin. In the examples outlined here, all agents are LLM-based. If a node in the multi-agent organization is to have zero autonomy, then that agent, paired with its human counterpart, should run everything by the human.
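As a sketch of enforcing that DAG constraint, the standard-library `graphlib` module can validate an agent network before deployment. The network below, with edges from up-chain agents to their down-chain agents, is a made-up example.

```python
# Sketch: verify an agent network is a DAG before deploying it.
# The example network is hypothetical; each agent maps to the agents it
# delegates to (its down-chain agents).
from graphlib import TopologicalSorter, CycleError

network = {
    "front_desk": {"sales", "support"},   # front_desk delegates to sales and support
    "sales": {"pricing"},
    "support": {"pricing"},
    "pricing": set(),
}

try:
    order = list(TopologicalSorter(network).static_order())
    print("valid DAG, evaluation order:", order)
except CycleError as err:
    # err.args[1] holds the offending cycle, e.g. ["sales", "pricing", "sales"]
    print("cycle found, fix the network:", err.args[1])
```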


Preparing Project Managers for an AI-Driven Future

Right now, about 95% of AI conversations are around tools that help people do their jobs better, like ChatGPT or other large language models. For most project managers, AI can be a huge timesaver. Think of it as a tool that takes on repetitive tasks—like summarizing meeting notes or helping with scheduling—so you can focus on higher-value work. ... AI can free you up to focus on the strategic parts of your job. It’s not here to replace project managers; it’s here to make them more efficient. At this moment, a lot of people are using AI from a personal or group productivity perspective. But they are increasingly going to depend on AI as part of their team. You’re already managing more AI than you might think. And in the future, you’ll be managing a lot more. Some things will be done by people and some by machines, and we need to make sure the whole thing happens in a deliberately planned way. ... The first thing to understand is that AI projects are data projects. If you’re used to traditional software projects, where functionality is front and center, AI is different. AI relies on data quality—“garbage in, garbage out,” as they say. Your primary focus needs to be on getting the right data in and managing the outputs, which are data as well.


Making quantum computing accessible through decentralization

A decentralized model for quantum computing sidesteps many of these challenges. Rather than relying on centralized hardware-intensive setups, it distributes computational tasks across a global network of nodes. This approach taps into existing resources—standard GPUs, laptops, and servers—without needing the extreme cooling or complex facilities required by traditional quantum hardware. Instead, this decentralized network forms a collective computational resource capable of solving real-world problems at scale using quantum techniques. This decentralized Quantum-as-a-Service approach emulates the behaviors of quantum systems without strict hardware demands. By decentralizing the computational load, these networks achieve a comparable level of efficiency and speed to traditional quantum systems—without the same logistical and financial constraints. ... Decentralized quantum computing represents a transformative shift in how we approach advanced problem-solving. By leveraging accessible infrastructure and distributing tasks across a global network, powerful computing is brought within reach of many who were previously excluded. 


Data Security vs. Cyber Security – Why the Difference Matters

Cybersecurity is the practice of safeguarding digital systems, networks, and programs from attacks that aim to steal, alter, or destroy sensitive data, extort money through ransomware, or disrupt business operations. Despite a substantial $183 billion investment in traditional security measures in 2023 and projections indicating a 14% increase in these security budgets for 2024, data breaches surged by 78%, reaching a record high. ... Data is the most valuable commodity of a company, yet we don’t see resource allocation and time investment in data security reflecting this importance. Data security involves protecting the data itself. Once protected, the data can travel anywhere and remain protected. Having the fine granularity to safeguard the data allows you to grant users the minimum access necessary for their job functions. When someone does need to use the data, they must be authorized to do so. ... Zero trust data protection techniques significantly enhance data security posture and business value. The first step to improving security and data value is identifying the most at-risk yet least accessed data. It’s essential to assess the need for clear-text visibility of high-risk data across people, processes, and systems and to consider the business impact of minimizing this risk, including factors like regulatory compliance, reputation, and insurance.


Is Your Phone Spying On You? How to Check and What to Do

For years, people have noticed advertisements for products they recently discussed in conversation — even without searching for them online — suddenly appear on their devices. While many dismissed this as a coincidence or attributed it to targeted advertising based on online searches, it turns out there’s more to the story. According to a report by 404 Media, a marketing firm has confirmed that smartphones are not just tracking users’ online activity — they are also listening to what users say out loud, near their phones. Smartphones might indeed be listening to our conversations, thanks to a technology known as “active listening.” This unsettling discovery comes after a marketing firm, whose clients include tech giants like Google and Facebook, admitted to using software that monitors users’ conversations through the microphones of their devices. This admission has raised serious questions about privacy, user consent, and the ethics of targeted advertising. … For better or for worse, there is generally nothing illegal about using audio information to target advertising. While it is obviously illegal to spy on someone without their consent, most phone users have given their permission for this practice without knowing, according to legal experts.


CNCF Brings Jaeger and OpenTelemetry Closer Together to Improve Observability

In the wake of adding support for OpenTelemetry, the project is now working on revamping the user interface for Jaeger to make that data more easily discoverable, in addition to normalizing dependency views. The project is also moving toward adding support for the Storage v2 interface to consume OpenTelemetry data natively, along with support for ClickHouse as the official storage backend for tracing data. Finally, the project intends to add support for Helm charts and an Operator that will make deploying Jaeger on Kubernetes clusters simpler. ... The challenge, of course, has been first finding the funding for observability initiatives, followed by the issues that arise as DevOps teams move to consolidate tooling. Many software engineers naturally become attached to a particular monitoring tool. Convincing them to swap it out for another platform requires effort and, most importantly, training. Each organization will individually decide to what degree it wants to drive tool consolidation; however, in many cases, the cost of acquiring an observability platform assumes savings will be generated by eliminating the need for other tools.
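To show what native OTLP ingestion looks like from the application side, here is a minimal sketch that emits OpenTelemetry spans a Jaeger backend with OTLP enabled can consume directly. The endpoint, service name, and span names are illustrative assumptions.

```python
# Sketch: emitting OpenTelemetry traces that Jaeger can ingest natively over
# OTLP/gRPC (Jaeger commonly listens on port 4317 when OTLP is enabled).
# pip install opentelemetry-sdk opentelemetry-exporter-otlp
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("place-order"):      # parent span
    with tracer.start_as_current_span("charge-card"):  # child span
        pass  # business logic would go here
```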


Zero Days Top Cybersecurity Agencies' Most-Exploited List

The prevalence of zero-day vulnerabilities on this year's list is a reminder that attackers regularly seek ways of exploiting widely used types of software and hardware before vendors identify the underlying flaw and fix it. The joint security advisory also details guidance prepared by CISA and the National Institute of Standards and Technology designed to improve organizations' cyber resilience to better combat all types of cybersecurity threats. Specific recommendations include regularly using automated asset discovery to find all of the hardware, software, systems and services inside an IT organization's estate and locking them down as much as possible; prepping and testing incident response plans; and keeping regular, secure backup copies stored off-network to facilitate rapid repair and restoration of systems. The guidance also recommends implementing zero trust network architecture, using phishing-resistant multifactor authentication as an identity and access management control, enforcing least-privileged access, and reducing the number of third-party applications and unique types of builds used.


Achieving Optimal Outcomes in Security Through Platformization

Platformization unifies multiple solutions and services into a single architecture with a shared data store and streamlined management. With native integrations, each component becomes more powerful than standalone products. This approach helps increase productivity, simplify operations, and extract the most value from data, all leading to better security outcomes and greater efficiency. ... Using the platform approach should never entail giving up security efficacy for the sake of vendor consolidation or simplified management. If there is a corresponding set of point products in a given area, the minimum bar by which the “platform” component must be measured is the very best of those individual tools. Flexibility and scalability are important. A platform needs to empower your company to gradually grow into using it. A total “rip and replace” of multiple security tools at once is far more complex than most enterprises are willing to attempt. It’s even harder when you factor in the differing replacement cycles of existing solutions. You need the option to adopt the platform piece by piece or all at once – whichever suits your organization best – while retaining the ability to cover all your security bases.



Quote for the day:

“Opportunities don’t happen, you create them.” -- Chris Grosser

Daily Tech Digest - November 23, 2024

AI Regulation Readiness: A Guide for Businesses

The first thing to note about AI compliance today is that few laws and other regulations are currently on the books that impact the way businesses use AI. Most regulations designed specifically for AI remain in draft form. That said, there are a host of other regulations — like the General Data Protection Regulation (GDPR), the California Privacy Rights Act (CPRA), and the Personal Information Protection and Electronic Documents Act (PIPEDA) — that have important implications for AI. These compliance laws were written before the emergence of modern generative AI technology placed AI onto the radar screens of businesses (and regulators) everywhere, and they mention AI sparingly if at all. But these laws do impose strict requirements related to data privacy and security. Since AI and data go hand-in-hand, you can't deploy AI in a compliant way without ensuring that you manage and secure data as current regulations require. This is why businesses shouldn't think of AI as an anything-goes space due to the lack of regulations focused on AI specifically. Effectively, AI regulations already exist in the form of data privacy rules. 


Cloud vs. On-Prem AI Accelerators: Choosing the Best Fit for Your AI Workloads

Like most types of hardware, AI accelerators can run either on-prem or in the cloud. An on-prem accelerator is one that you install in servers you manage yourself. This requires you to purchase the accelerator and a server capable of hosting it, set them up, and manage them on an ongoing basis. A cloud-based accelerator is one that a cloud vendor makes available to customers over the internet using an IaaS model. Typically, to access a cloud-based accelerator, you'd choose a cloud server instance designed for AI. For example, Amazon offers EC2 cloud server instances that feature its Trainium AI accelerator chip. Google Cloud offers Tensor Processing Units (TPUs), another type of AI accelerator, as one of its cloud server options. ... Some types of AI accelerators are only available through the cloud. For instance, you can't purchase the AI chips developed by Amazon and Google for use in your own servers. You have to use cloud services to access them. ... Like most cloud-based solutions, cloud AI hardware is very scalable. You can easily add more AI server instances if you need more processing power. This isn't the case with on-prem AI hardware, which is costly and complicated to scale up.
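As a minimal sketch of provisioning one of these cloud-only accelerators, the boto3 call below requests a Trainium-backed EC2 instance. The AMI ID is a placeholder and availability varies by region; treat this as an illustration, not a production launch script.

```python
# Sketch: launching an AWS Trainium-backed instance with boto3.
# The AMI ID is a placeholder; pick a current Deep Learning AMI for your region.
# pip install boto3
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="trn1.2xlarge",      # Trainium accelerator instance family
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```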


Platform Engineering Is The New DevOps

Platform engineering has provided a useful escape hatch at just the right time. Its popularity has grown strongly, with a well-attended inaugural platform engineering day at KubeCon Paris in early 2024 confirming attendee interest. A platform engineering day was part of the KubeCon NA schedule this past week and will also be included at next year’s KubeCon in London. “I haven't seen platform engineering pushed top down from a C-suite. I've seen a lot of guerilla stuff with platform and ops teams just basically going out and doing a skunkworks thing and sneaking it into production and then making a value case and growing from there,” said Keith Babo, VP of product and marketing at Solo.io. ... “If anyone ever asks me what’s my definition of platform engineering, I tend to think of it as DevOps at scale. It’s how DevOps scales,” says Kennedy. The focus has shifted away from building cloud native technology, done by developers, to using cloud native technology, which is largely the realm of operations. That platform engineering should start to take over from DevOps in this ecosystem may not be surprising, but it does highlight important structural shifts.


Artificial Intelligence and Its Ascendancy in Global Power Dynamics

According to the OECD, AI is defined as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions that influence real or virtual environments.” The vision for Responsible AI is clear: establish global auditing standards, ensure transparency, and protect privacy through secure data governance. Yet, achieving Responsible AI requires more than compliance checklists; it demands proactive governance. For example, the EU’s AI Act takes a hardline approach to regulating high-risk applications like real-time biometric surveillance and automated hiring processes, whereas the U.S., under President Biden’s Executive Order on Safe, Secure, and Trustworthy AI, emphasizes guidelines over strict enforcement. ... AI is becoming the lynchpin of cybersecurity and national security strategies. State-backed actors from China, Iran, and North Korea are weaponizing AI to conduct sophisticated cyber-attacks on critical infrastructure. The deployment of Generative Adversarial Networks (GANs) and WormGPT is automating cyber operations at scale, making traditional defenses increasingly obsolete. In this context, a cohesive, enforceable framework for AI governance is no longer optional but essential. 


Why voice biometrics is a must-have for modern businesses

Voice biometrics are making waves across multiple industries. Here’s a look at how different sectors can leverage this technology for a competitive edge. Financial services: Banks and financial institutions are actively integrating voice verification into call centers, allowing customers to authenticate themselves with their voice, eliminating the need for secret words or PIN codes. This strengthens security, reduces time and cost per customer call and enhances the customer experience. Automotive: With the rise of connected vehicles, voice is already heavily used with integrated digital assistants that provide hands-free access to in-car services like navigation, settings and communications. Adding voice recognition allows such in-car services to be personalized for the driver and opens the possibility of further enhancements such as commerce. Automotive brands can integrate voice recognition to offer seamless access to new services like parking, fueling, charging and curbside pick-up by utilizing in-car payments that boost security, convenience and customer satisfaction. Healthcare: Healthcare providers can use voice authentication to securely verify patient identities over the phone or via telemedicine. This ensures that sensitive information remains protected, while providing a seamless experience for patients who may need hands-free options.


When and Where to Rate-Limit: Strategies for Hybrid and Legacy Architectures

While rate-limiting is an essential tool for protecting your system from traffic overloads, applying it directly at the application layer — whether for microservices or legacy applications — is often a suboptimal strategy. ... Legacy systems operate differently. They often rely on vertical scaling and have limited flexibility to handle increased loads. While it might seem logical to apply rate-limiting directly to protect fragile legacy systems, this approach usually falls short. The main issue with rate-limiting at the legacy application layer is that it’s reactive. By the time rate-limiting kicks in, the system might already be overloaded. Legacy systems, lacking the scalability and elasticity of microservices, are more prone to total failure under high load, and rate-limiting at the application level can’t stop this once the traffic surge has already reached its peak. ... Rate-limiting should be handled further upstream rather than deep in the application layer, where it either conflicts with scalability (in microservices) or arrives too late to prevent failures. This leads us to the API gateway, the strategic point in the architecture where traffic control is most effective. 
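A common way to enforce limits at the gateway tier is a token bucket. The sketch below is a single-process illustration with arbitrary rate and burst numbers; a real gateway would typically back this with a shared store such as Redis.

```python
# Sketch: a token-bucket limiter of the kind an API gateway applies upstream,
# so traffic is shed before it reaches a fragile legacy backend.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec          # steady-state requests per second
        self.capacity = burst             # short bursts allowed above the rate
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # refill tokens for the time elapsed, capped at the burst capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                      # caller should respond with HTTP 429

bucket = TokenBucket(rate_per_sec=5, burst=10)
print([bucket.allow() for _ in range(12)].count(True))  # roughly the first 10 pass
```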


Survey Surprise: Quantum Now in Action at Almost One-Third of Sites

The use cases for quantum — scientific research, complex simulations — have been documented for a number of years. However, with the arrival of artificial intelligence, particularly generative AI, on the scene, quantum technology may start finding more mainstream business use cases. In a separate report out of Sogeti (a division of Capgemini Group), Akhterul Mustafa describes an impending mashup of generative AI and quantum computing as the “tech world’s version of a dream team, not just changing the game but also pushing the boundaries of what we thought was possible.” ... The convergence of generative AI and quantum computing brings “some pretty epic perks,” Mustafa states. For example, it enables the supercharging of AI models. “Training AI models is a beastly task that needs tons of computing power. Enter quantum computers, which can zip through complex calculations, potentially making AI smarter and faster.” In addition, “quantum computers can sift through massive datasets in a blink. Pair that with generative AI’s knack for cooking up innovative solutions, and you’ve got a recipe for solving brain-bending problems in areas like health, environment, and beyond.”


How Continuous Threat Exposure Management (CTEM) Helps Your Business

A CTEM framework typically includes five phases: identification, prioritization, mitigation, validation, and reporting and improvement. In the first phase, systems are continuously monitored to identify new or emerging vulnerabilities and potential attack vectors. This continuous monitoring is essential to the vulnerability management lifecycle. Identified vulnerabilities are then assessed based on their potential impact on critical assets and business operations. In the mitigation phase, action is taken to defend against high-risk vulnerabilities by applying patches, reconfiguring systems or adjusting security controls. The validation stage focuses on testing defenses to ensure vulnerabilities are properly mitigated and the security posture remains strong. In the final phase of reporting and improvement, IT leaders gain access to security metrics and improved defense routes, based on lessons learned from incident response. ... While both CTEM and vulnerability management aim to identify and remediate security weaknesses, they differ in scope and execution. Vulnerability management is more about targeted and periodic identification of vulnerabilities within an organization based on a set scan window.
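As a toy illustration of the prioritization phase, the sketch below ranks findings by CVSS severity weighted by asset criticality. Both the weighting rule and the findings are invented for illustration; CTEM does not prescribe this exact scoring.

```python
# Sketch of the prioritization phase: rank findings by severity weighted by
# how critical the affected asset is to business operations.
findings = [
    {"cve": "CVE-2024-0001", "cvss": 9.8, "asset": "payments-db", "criticality": 5},
    {"cve": "CVE-2024-0002", "cvss": 7.5, "asset": "wiki", "criticality": 1},
    {"cve": "CVE-2024-0003", "cvss": 6.1, "asset": "auth-gateway", "criticality": 4},
]

for f in sorted(findings, key=lambda f: f["cvss"] * f["criticality"], reverse=True):
    print(f'{f["cve"]}: score {f["cvss"] * f["criticality"]:.1f} on {f["asset"]}')
```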


DevOps in the Cloud: Leveraging Cloud Services for Optimal DevOps Practices

A well-designed DevOps transformation strategy can help organizations deliver software products and their services quickly and reliably while improving the overall efficiency of their development and delivery processes. ... Cloud platforms facilitate the immediate provisioning of infrastructure components, including servers, storage units, and databases. This helps teams swiftly initiate new development and testing environments, hastening the software development lifecycle. Companies can see a significant decrease in infrastructure provisioning time by integrating cloud services. ... DevOps helps development and operations teams work together. Cloud platforms provide a central place for storing code, configurations, and important files so everyone can be on the same page. Additionally, cloud-based communication and collaboration tools streamline communication and break down silos between teams. ... Cloud services provide a pay-as-you-go system, so there is no need for a large upfront investment in hardware. This way, companies can scale their infrastructure according to their requirements, saving a lot of money. 


Reinforcement learning algorithm provides an efficient way to train more reliable AI agents

To boost the reliability of reinforcement learning models for complex tasks with variability, MIT researchers have introduced a more efficient algorithm for training them. The findings are published on the arXiv preprint server. The algorithm strategically selects the best tasks for training an AI agent so it can effectively perform all tasks in a collection of related tasks. In the case of traffic signal control, each task could be one intersection in a task space that includes all intersections in the city. By focusing on a smaller number of intersections that contribute the most to the algorithm's overall effectiveness, this method maximizes performance while keeping the training cost low. The researchers found that their technique was between five and 50 times more efficient than standard approaches on an array of simulated tasks. This gain in efficiency helps the algorithm learn a better solution faster, ultimately improving the performance of the AI agent. "We were able to see incredible performance improvements, with a very simple algorithm, by thinking outside the box. An algorithm that is not very complicated stands a better chance of being adopted by the community because it is easier to implement and easier for others to understand."
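The paper's actual selection rule is not reproduced here, but the idea of training on the few tasks that help the whole collection most can be sketched as a greedy marginal-gain selection. Everything below, including the benefit matrix, is an illustrative assumption.

```python
# Illustrative sketch only (not the paper's algorithm): greedily pick the k
# training tasks whose estimated transfer benefit to the whole task collection
# is largest. benefit[i][j] is a hypothetical estimate of how much training on
# task i improves performance on task j.
def select_tasks(benefit: list[list[float]], k: int) -> list[int]:
    n = len(benefit)
    chosen: list[int] = []
    covered = [0.0] * n
    for _ in range(k):
        # marginal gain of adding task i = improvement over current coverage
        gains = [
            sum(max(benefit[i][j] - covered[j], 0.0) for j in range(n))
            for i in range(n)
        ]
        best = max(range(n), key=lambda i: gains[i])
        chosen.append(best)
        covered = [max(covered[j], benefit[best][j]) for j in range(n)]
    return chosen

# Toy example: 4 intersections; training on 0 helps 0 and 1, training on 3 helps 2 and 3.
benefit = [
    [1.0, 0.8, 0.1, 0.1],
    [0.7, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.6],
    [0.1, 0.1, 0.7, 1.0],
]
print(select_tasks(benefit, k=2))  # -> [0, 3]
```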



Quote for the day:

"Too many of us are not living our dreams because we are living our fears." -- Les Brown

Daily Tech Digest - November 22, 2024

AI agents are coming to work — here’s what businesses need to know

Defining exactly what an agent is can be tricky, however: LLM-based agents are an emerging technology, and there’s a level of variance in the sophistication of tools labelled as “agents,” as well as how related terms are applied by vendors and media. And as with the first wave of generative AI (genAI) tools, there are question marks around how businesses will use the technology. ... With so many tools in development or coming to the market, there’s a certain amount of confusion among businesses that are struggling to keep pace. “The vendors are announcing all of these different agents, and you can imagine what it’s like for the buyers: instead of ‘The Russians are coming, the Russians are coming,’ it’s ‘the agents are coming, the agents are coming,’” said Loomis. “They’re being bombarded by all of these new offerings, all of this new terminology, and all of these promises of productivity.” Software vendors also offer varying interpretations of the term “agent” at this stage, and tools coming to market exhibit a broad spectrum of complexity and autonomy. ... Many of the agent builder tools coming to business and work apps require little or no expertise. This accessibility means a wide range of workers could manage and coordinate their own agents.


The limits of AI-based deepfake detection

In terms of inference-based detection, ground truth is never known and is assumed as such, so detection is based on a one-to-ninety-nine percentage likelihood that the content in question is or is not manipulated. An inference-based approach needs no buy-in from content platforms, but it does need robust models trained on a wide variety of deepfaking techniques and technologies across varied use cases and circumstances. To stay ahead of emerging threat vectors and groundbreaking new models, those building an inference-based solution can look to emerging gen AI research and fold such methods into detection models before, or as, that research becomes productized. ... Greater public awareness and education will always be of immense importance, especially in places where content is consumed that could potentially be deepfaked or artificially manipulated. Yet deepfakes are getting so convincing, so realistic, that even storied researchers now have a hard time differentiating real from fake simply by looking at or listening to a media file. This is how advanced deepfakes have become, and they will only continue to grow in believability and realism. This is why it is crucial to implement deepfake detection solutions in the aforementioned content platforms or anywhere deepfakes can and do exist.


Quantum error correction research yields unexpected quantum gravity insights

So far, scientists have not found a general way of differentiating trivial and non-trivial AQEC codes. However, this blurry boundary motivated Liu, Daniel Gottesman of the University of Maryland, US; Jinmin Yi of Canada’s Perimeter Institute for Theoretical Physics; and Weicheng Ye at the University of British Columbia, Canada, to develop a framework for doing so. To this end, the team established a crucial parameter called subsystem variance. This parameter describes the fluctuation of subsystems of states within the code space, and, as the team discovered, links the effectiveness of AQEC codes to a property known as quantum circuit complexity. ... The researchers also discovered that their new AQEC theory carries implications beyond quantum computing. Notably, they found that the dividing line between trivial and non-trivial AQEC codes also arises as a universal “threshold” in other physical scenarios – suggesting that this boundary is not arbitrary but rooted in elementary laws of nature. One such scenario is the study of topological order in condensed matter physics. Topologically ordered systems are described by entanglement conditions and their associated code properties. 


Towards greener data centers: A map for tech leaders

The transformation towards sustainability can be complex, involving key decisions about data center infrastructure. Staying on-premises offers control over infrastructure and data but poses questions about energy sourcing. Shifting to hybrid or cloud models can leverage the innovations and efficiencies of hyperscalers, particularly regarding power management and green energy procurement. One of the most significant architectural advancements in this context is hyperconverged infrastructure (HCI). As we know, traditionally data centers operate using a three-tier architecture comprising separate servers, storage, and network equipment. This model, though reliable, has clear limitations in terms of energy consumption and cooling efficiency. By merging the server and storage layers, HCI reduces both the power demands and the associated cooling requirements. ... The drive to create more efficient and environmentally conscious data centers is not just about cost control; it’s also about meeting the expectations of regulators, customers, and stakeholders. As AI and other compute-intensive technologies continue to proliferate, organizations must reassess their infrastructure strategies, not just to meet sustainability goals but to remain competitive.


What is a data architect? Skills, salaries, and how to become a data framework master

The data architect and data engineer roles are closely related. In some ways, the data architect is an advanced data engineer. Data architects and data engineers work together to visualize and build the enterprise data management framework. The data architect is responsible to visualize the blueprint of the complete framework that data engineers then build. ... Data architect is an evolving role and there’s no industry-standard certification or training program for data architects. Typically, data architects learn on the job as data engineers, data scientists, or solutions architects, and work their way to data architect with years of experience in data design, data management, and data storage work. ... Data architects must have the ability to design comprehensive data models that reflect complex business scenarios. They must be proficient in conceptual, logical, and physical model creation. This is the core skill of the data architect and the most requested skill in data architect job descriptions. This often includes SQL development and database administration. ... With regulations continuing to evolve, data architects must ensure their organization’s data management practices meet stringent legal and ethical standards. They need skills to create frameworks that maintain data quality, security, and privacy.


AI – Implementing the Right Technology for the Right Use Case

Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization. This is a theme that also emerged as cybersecurity automation matured – the need to identify the right use case for the technology, rather than try to apply it across the board. ... That said, AI is and will continue to be a useful tool. In today’s economic climate, as businesses adapt to a new normal of continuous change, AI—alongside automation—can be a scale function for cybersecurity teams, enabling them to pivot and scale to defend against evermore diverse attacks. In fact, our recent survey of 750 cybersecurity professionals found that 58% of organizations are already using AI in cybersecurity to some extent. However, we do anticipate that AI in cybersecurity will pass through the same adoption cycle and challenges experienced by “the cloud” and automation, including trust and technical deployment issues, before it becomes truly productive.


A GRC framework for securing generative AI

Understanding the three broad categories of AI applications is just the beginning. To effectively manage risk and governance, further classification is essential. By evaluating key characteristics such as the provider, hosting location, data flow, model type, and specificity, enterprises can build a more nuanced approach to securing AI interactions. A crucial factor in this deeper classification is the provider of the AI model. ... As AI technology advances, it brings both transformative opportunities and unprecedented risks. For enterprises, the challenge is no longer whether to adopt AI, but how to govern AI responsibly, balancing innovation against security, privacy, and regulatory compliance. By systematically categorizing generative AI applications—evaluating the provider, hosting environment, data flow, and industry specificity—organizations can build a tailored governance framework that strengthens their defenses against AI-related vulnerabilities. This structured approach enables enterprises to anticipate risks, enforce robust access controls, protect sensitive data, and maintain regulatory compliance across global jurisdictions. The future of enterprise AI is about more than just deploying the latest models; it’s about embedding AI governance deeply into the fabric of the organization.
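A minimal sketch of such a classification record follows, with the dimensions named in the passage captured as fields. The category values and the toy risk rule are assumptions, not the article's framework.

```python
# Sketch: a minimal classification record for generative AI applications,
# covering provider, hosting, data flow, and specificity. Values are illustrative.
from dataclasses import dataclass

@dataclass
class AIAppProfile:
    name: str
    provider: str          # e.g. "first-party", "open-source", "commercial API"
    hosting: str           # "on-prem", "private cloud", "vendor-hosted"
    data_flow: str         # "no data leaves", "prompts leave", "trains on our data"
    specificity: str       # "general-purpose" vs. "industry-specific"

    def risk_tier(self) -> str:
        # Toy rule: vendor-hosted apps that send data out get the strictest review.
        if self.hosting == "vendor-hosted" and self.data_flow != "no data leaves":
            return "high"
        return "standard"

app = AIAppProfile("contract-summarizer", "commercial API", "vendor-hosted",
                   "prompts leave", "industry-specific")
print(app.name, "->", app.risk_tier())
```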


Business Continuity Depends on the Intersection of Security and Resilience

The focus of security, or the goal of security, or the intended purpose of security in its most natural and traditional form, right before we start to apply it to other things, is to prevent bad things from happening, or protect the organization or protect assets. It doesn't necessarily have to be technology that does it. This is where your policies and procedures come into place. Letting users know what acceptable use policies are or what things are accepted when leveraging corporate resources. From a technology perspective, it's your firewalls, antivirus, intrusion detection systems and things of that nature. So, this is where we focus on good cyber hygiene. We're controlling the controllables and making sure that we're taking care of the things that are within our control. What about resilience? This one is near and dear to my heart. That's because I've been in tech and security for almost 25 years, and I've kind of gone through this evolution of what I think is important. We're trained as practitioners in this industry to believe that the goal is to reduce risk. We must reduce or mitigate cyber risk, or we can make other risk decisions. We can avoid it, we can accept it, or we can transfer it. But practically speaking, when we show up to work every day and we're doing something active, we're reducing risk.


How to stop data mesh turning into a data mess

Realistically, expecting employees to remember to follow data quality and compliance guidelines is neither fair nor enforceable. Adherence must be implemented without frustrating users, and become an integral part of the project delivery process. Unlikely as this sounds, a computational governance platform can impose the necessary standards as ‘guardrails’ while also accelerating the time to market of products. Sitting above an organisation’s existing range of data enablement and management tools, a computational governance platform ensures every project follows pre-determined policies, for quality, compliance, security, and architecture. Highly customisable standards can be set at global or local levels, whatever is required. ... While this might seem restrictive, there are many benefits from having a standardised way of working. To streamline processes, intelligent automated templates help data practitioners quickly initiate new projects and search for relevant data. The platform can oversee the deployment of data products by checking their compliance and taking care of the resource provisioning, freeing the teams from the burden of coping with infrastructure technicalities (on cloud or on-prem) and certifying data product compliance at the same time, before data products enter production. 
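As a sketch of what a computational guardrail might check, the function below validates a hypothetical data product descriptor against a few pre-determined policies. The required fields and the PII rule are invented for illustration.

```python
# Sketch: an automated guardrail of the kind a computational governance platform
# could run against a data product descriptor before deployment.
REQUIRED_FIELDS = {"owner", "domain", "classification", "retention_days"}

def check_guardrails(descriptor: dict) -> list[str]:
    violations = [f"missing field: {f}" for f in REQUIRED_FIELDS - descriptor.keys()]
    if descriptor.get("classification") == "pii" and not descriptor.get("encrypted", False):
        violations.append("PII data products must be encrypted at rest")
    return violations

product = {"owner": "sales-team", "domain": "crm", "classification": "pii",
           "retention_days": 365, "encrypted": False}
print(check_guardrails(product))  # -> ["PII data products must be encrypted at rest"]
```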


The SEC Fines Four SolarWinds Breach Victims

Companies should ensure the cyber and data security information they share within their organizations is consistent with what they share with government agencies, shareholders and the public, according to Buchanan Ingersoll & Rooney’s Sanger. This applies to their security posture prior to a breach, as well as their responses afterward. “Consistent messaging is difficult to manage given that dozens, hundreds or thousands could be responsible for an organization’s cybersecurity. Investigators will always be able to find a dissenting or more pessimistic outlook among the voices involved,” says Sanger. “If there is a credible argument that circumstances are or were worse than what the organization shares publicly, leadership should openly acknowledge it and take steps to justify the official perspective.” Corporate cybersecurity breach reporting is still relatively uncharted territory, however. “Even business leaders who intend to act with complete transparency can make inadvertent mistakes or communicate poorly, particularly because the language used to discuss cybersecurity is still developing and differs between communities,” says Sanger. “It’s noteworthy that the SEC framed each penalized company as having ‘negligently minimized its cybersecurity incident in its public disclosures.’”



Quote for the day:

"Perfection is not attainable, but if we chase perfection we can catch excellence." -- Vince Lombardi

Daily Tech Digest - November 21, 2024

Building Resilient Cloud Architectures for Post-Disaster IT Recovery

A resilient cloud architecture is designed to maintain functionality and service quality during disruptive events. These architectures ensure that critical business applications remain accessible, data remains secure, and recovery times are minimized, allowing organizations to maintain operations even under adverse conditions. To achieve resilience, cloud architectures must be built with redundancy, reliability, and scalability in mind. This involves a combination of technologies, strategies, and architectural patterns that, when applied collectively ... Cloud-based DRaaS solutions allow organizations to recover critical workloads quickly by replicating environments in a secondary cloud region. This ensures that essential services can be restored promptly in the event of a disruption. Automated backups, on the other hand, ensure that all extracted data is continually saved and stored in a secure environment. Using regular snapshots can also provide rapid restoration points, giving teams the ability to revert systems to a pre-disaster state efficiently. ... Infrastructure as code (IaC) allows for the automated setup and configuration of cloud resources, providing a faster recovery process after an incident.
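As a small illustration of automated restore points, the boto3 sketch below snapshots an EBS volume. The volume ID is a placeholder; in practice this would run on a schedule (for example, EventBridge invoking a Lambda function) rather than as a one-off script.

```python
# Sketch: taking regular EBS snapshots as automated restore points.
# pip install boto3
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for volume_id in ["vol-0123456789abcdef0"]:  # placeholder volume ID
    snap = ec2.create_snapshot(
        VolumeId=volume_id,
        Description="nightly restore point",
        TagSpecifications=[{
            "ResourceType": "snapshot",
            "Tags": [{"Key": "purpose", "Value": "dr"}],
        }],
    )
    print("created", snap["SnapshotId"])
```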


Agile Security Sprints: Baking Security into the SDLC

Making agile security sprints effective requires organizations to embrace security as a continuous, collaborative effort. The first step? Integrating security tasks into the product backlog right alongside functional requirements. This approach ensures that security considerations are tackled within the same sprint, allowing teams to address potential vulnerabilities as they arise — not after the fact when they're harder and more expensive to fix. ... By addressing security iteratively, teams can continuously improve their security posture, reducing the risk of vulnerabilities becoming unmanageable. Catching security issues early in the development lifecycle minimizes delays, enabling faster, more secure releases, which is critical in a competitive development landscape. The emphasis on collaboration between development and security teams breaks down silos, fostering a culture of shared responsibility and enhancing the overall security-consciousness of the organization. Quickly addressing security issues is often far more cost-effective than dealing with them post-deployment, making agile security sprints a necessary choice for organizations looking to balance speed with security.


The new paradigm: Architecting the data stack for AI agents

With the semantic layer and historical data-based reinforcement loop in place, organizations can power strong agentic AI systems. However, it’s important to note that building a data stack this way does not mean downplaying the usual best practices. This essentially means that the platform being used should ingest and process data in real-time from all major sources, have systems in place for ensuring the quality/richness of the data and then have robust access, governance and security policies in place to ensure responsible agent use. “Governance, access control, and data quality actually become more important in the age of AI agents. The tools to determine what services have access to what data become the method for ensuring that AI systems behave in compliance with the rules of data privacy. Data quality, meanwhile, determines how well an agent can perform a task,” Naveen Rao, VP of AI at Databricks, told VentureBeat. ... “No agent, no matter how high the quality or impressive the results, should see the light of day if the developers don’t have confidence that only the right people can access the right information/AI capability. This is why we started with the governance layer with Unity Catalog and have built our AI stack on top of that,” Rao emphasized.


Enhancing visibility for better security in multi-cloud and hybrid environments

The number one challenge for infrastructure and cloud security teams is visibility into their overall risk–especially in complex environments like cloud, hybrid cloud, containers, and Kubernetes. Kubernetes is now the tool of choice for orchestrating and running microservices in containers, but it has also been one of the last areas to catch speed from a security perspective, leaving many security teams feeling caught on their heels. This is true even if they have deployed admission control or have other container security measures in place. Teams need a security tool in place that can show them who is accessing their workloads and what is happening in them at any given moment, as these environments have an ephemeral nature to them. A lot of legacy tooling just has not kept up with this demand. The best visibility is achieved with tooling that allows for real-time visibility and real-time detection, not point-in-time snapshotting, which does not keep up with the ever-changing nature of modern cloud environments. To achieve better visibility in the cloud, automate security monitoring and alerting to reduce manual effort and ensure comprehensive coverage. Centralize security data using dashboards or log aggregation tools to consolidate insights from across your cloud platforms.
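As a sketch of real-time workload visibility, the official Kubernetes Python client can stream pod events as they happen. This assumes a reachable cluster and a local kubeconfig; it is a starting point for monitoring, not a security product.

```python
# Sketch: real-time visibility into pod events with the official Kubernetes
# Python client. Requires a reachable cluster and a local kubeconfig.
# pip install kubernetes
from kubernetes import client, config, watch

config.load_kube_config()               # or config.load_incluster_config() in-cluster
v1 = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(v1.list_pod_for_all_namespaces, timeout_seconds=30):
    pod = event["object"]
    print(event["type"], pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```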


How Augmented Reality is Shaping EV Development and Design

Traditionally, prototyping has been a costly and time-consuming stage in vehicle development, often requiring multiple physical models and extensive trial and error. AR is disrupting this process by enabling engineers to create and test virtual prototypes before building physical ones. Through immersive visualizations, teams can virtually assess design aspects like fit, function, and aesthetics, streamlining modifications and significantly shortening development cycles. ... One of the key shifts in EV manufacturing is the emphasis on consumer-centric design. EV buyers today expect not just efficiency but also vehicles that reflect their lifestyle choices, from customizable interiors to cutting-edge tech features. AR offers manufacturers a way to directly engage consumers in the design process, offering a virtual showroom experience that enhances the customization journey. ... AR-assisted training is one frontier seeing a lot of adoption. By removing humans from dangerous scenarios while still allowing them to interact with those same scenarios, companies can increase safety while still offering practical training. In one example from Volvo, augmented reality is allowing first responders to assess damage on EV vehicles and proceed with caution.


Digital twins: The key to unlocking end-to-end supply chain growth

Digital twins can be used to model the interaction between physical and digital processes all along the supply chain—from product ideation and manufacturing to warehousing and distribution, from in-store or online purchases to shipping and returns. Thus, digital twins paint a clear picture of an optimal end-to-end supply chain process. What’s more, paired with today’s advances in predictive AI, digital twins can become both predictive and prescriptive. They can predict future scenarios to suggest areas for improvement or growth, ultimately leading to a self-monitoring and self-healing supply chain. In other words, digital twins empower the switch from heuristic-based supply chain management to dynamic and granular optimization, providing a 360-degree view of value and performance leakage. To understand how a self-healing supply chain might work in practice, let’s look at one example: using digital twins, a retailer sets dynamic SKU-level safety stock targets for each fulfillment center that dynamically evolve with localized and seasonal demand patterns. Moreover, this granular optimization is applied not just to inventory management but also to every part of the end-to-end supply chain—from procurement and product design to manufacturing and demand forecasting. 
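As a toy version of a dynamic SKU-level safety stock rule, the sketch below applies the textbook formula (z times the demand standard deviation times the square root of lead time) per fulfillment center. The service level and demand figures are invented.

```python
# Sketch: a dynamic SKU-level safety stock rule of the kind a digital twin
# could drive, recomputed as localized demand patterns shift.
import math

def safety_stock(z: float, demand_std_per_day: float, lead_time_days: float) -> float:
    # textbook formula: z * sigma_demand * sqrt(lead time)
    return z * demand_std_per_day * math.sqrt(lead_time_days)

# ~97.5% service level (z = 1.96), with made-up per-center demand figures
for center, (std, lead) in {"east": (40.0, 4), "west": (25.0, 7)}.items():
    print(center, round(safety_stock(1.96, std, lead)))
```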


Illegal Crypto Mining: How Businesses Can Prevent Themselves From Being ‘Cryptojacked’

Business leaders might believe that illegal crypto mining programs pose no risks to their operations. Considering the number of resources most businesses dedicate to cybersecurity, it might seem like a low priority in comparison to other risks. However, the successful deployment of malicious crypto mining software can lead to even more risks for businesses, putting their cybersecurity posture in jeopardy. Malware and other forms of malicious software can drain computing resources, cutting the life expectancy of computer hardware. This can decrease the long-term performance and productivity of all infected computers and devices. Additionally, the large amount of energy required to support the high computing power of crypto mining can drain electricity across the organization. But one of the most severe risks associated with malicious crypto mining software is that it can include other code that exploits existing vulnerabilities. ... While powerful cybersecurity tools are certainly important, there’s no single solution to combat illegal crypto mining. But there are different strategies that business leaders can implement to reduce the likelihood of a breach, and mitigating human error is among the most important. 
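As a crude illustration of catching the resource drain described here, the sketch below flags processes that keep a CPU core saturated. The threshold is arbitrary, and real detection would also inspect network connections to known mining pools and binary signatures.

```python
# Sketch: a crude cryptojacking heuristic that flags CPU-saturating processes.
# pip install psutil
import psutil

CPU_THRESHOLD = 90.0  # percent of one core; arbitrary illustrative cutoff

for proc in psutil.process_iter(["pid", "name"]):
    try:
        usage = proc.cpu_percent(interval=0.1)  # samples each process briefly
        if usage > CPU_THRESHOLD:
            print(f"suspicious: pid={proc.info['pid']} "
                  f"name={proc.info['name']} cpu={usage:.0f}%")
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        continue  # process exited or is protected; skip it
```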


10 Most Impactful PAM Use Cases for Enhancing Organizational Security

Security extends beyond internal employees as collaborations with third parties also introduce vulnerabilities. PAM solutions allow you to provide vendors with time-limited, task-specific access to your systems and monitor their activity in real time. With PAM, you can also promptly revoke third-party access when a project is completed, ensuring no dormant accounts remain unattended. Suppose you engage third-party administrators to manage your database. In this case, PAM enables you to restrict their access based on a "need-to-know" basis, track their activities within your systems, and automatically remove their access once they complete the job. ... Reused or weak passwords are easy targets for attackers. Relying on manual password management adds another layer of risk, as it is both tedious and prone to human error. That's where PAM solutions with password management capabilities can make a difference. Such solutions can help you secure passwords throughout their entire lifecycle — from creation and storage to automatic rotation. By handling credentials with such PAM solutions and setting permissions according to user roles, you can make sure all the passwords are accessible only to authorized users. 
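Below is a minimal sketch of time-boxed, task-scoped third-party access of the kind described here. The `Grant` structure and scope format are illustrative assumptions, not a specific PAM product's API.

```python
# Sketch: time-limited, need-to-know access for a third-party vendor with
# automatic expiry. Structure and scope format are illustrative.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Grant:
    vendor: str
    scope: str            # the one resource covered, e.g. "orders-db:read"
    expires_at: datetime

    def is_valid(self, now: datetime | None = None) -> bool:
        return (now or datetime.now(timezone.utc)) < self.expires_at

grant = Grant(
    vendor="db-contractor",
    scope="orders-db:read",
    expires_at=datetime.now(timezone.utc) + timedelta(hours=8),  # engagement window
)

def authorize(g: Grant, resource: str) -> bool:
    # access is allowed only while the grant is live and only for its scope
    return g.is_valid() and g.scope.startswith(resource)

print(authorize(grant, "orders-db"))   # True during the 8-hour window
```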


The Information Value Chain as a Framework for Tackling Disinformation

The information value chain has three stages: production, distribution, and consumption. Claire Wardle proposed an early version of this framework in 2017. Since then, scholars have suggested tackling disinformation through an economics lens. Using this approach, we can understand production as supply, consumption as demand, and distribution as a marketplace. In so doing, we can single out key stakeholders at each stage and determine how best to engage them to combat disinformation. By seeing disinformation as a commodity, we can better identify and address the underlying motivations ... When it comes to the disinformation marketplace, disinformation experts mostly agree it is appropriate to point the finger at Big Tech. Profit-driven social media platforms have understood for years that our attention is the ultimate gold mine and that inflammatory content is what attracts the most attention. There is, therefore, a direct correlation between how much disinformation circulates on a platform and how much money it makes from advertising. ... To tackle disinformation, we must think like economists, not just like fact-checkers, technologists, or investigators. We must understand the disinformation value chain and identify the actors and their incentives, obstacles, and motivations at each stage.


Why do developers love clean code but hate writing documentation?

In fast-paced development environments, particularly those adopting Agile methodologies, maintaining up-to-date documentation can be challenging. Developers often deprioritize documentation due to tight deadlines and a focus on delivering working code. This leads to informal, hard-to-understand documentation that quickly becomes outdated as the software evolves. Another significant issue is that documentation is frequently viewed as unnecessary overhead. Developers may believe that code should be self-explanatory or that documentation slows down the development process. ... To prevent documentation from becoming a second-class citizen in the software development lifecycle, Ferri-Beneditti argues that documentation needs to be observable, something that can be measured against the KPIs and goals developers and their managers often use when delivering projects. ... By offloading the burden of documentation creation onto AI, developers are free to stay in their flow state, focusing on the tasks they enjoy—building and problem-solving—while still ensuring that the documentation remains comprehensive and up-to-date. Perhaps most importantly, this synergy between GenAI and human developers does not remove human oversight. 



Quote for the day:

"The harder you work for something, the greater you'll feel when you achieve it." -- Unknown

Daily Tech Digest - November 20, 2024

5 Steps To Cross the Operational Chasm in Incident Management

A siloed approach to incident management slows down decision-making and harms cross-team communication during incidents. Instead, organizations must cultivate a cross-functional culture where all team members are able to collaborate seamlessly. Cross-functional collaboration ensures that incident response plans are comprehensive and account for the insights and expertise contained within specific teams. This communication can be expedited with the support of AI tools to summarize information and draft messages, as well as the use of automation for sharing regular updates. ... An important step in developing a proactive incident management strategy is conducting post-incident reviews. When incidents are resolved, teams are often so busy that they are forced to move on without examining the contributing factors or identifying where processes can be improved. Conducting blameless reviews after significant incidents — and ideally every incident — is crucial for continuously and iteratively improving the systems in which incidents occur. This should cover both the technological and human aspects. Reviews must be thorough and uncover process flaws, training gaps or system vulnerabilities to improve incident management.


How to transform your architecture review board

A modernized approach to architecture review boards should start with establishing a partnership, building trust, and seeking collaboration between business leaders, devops teams, and compliance functions. Everyone in the organization uses technology, and many leverage platforms that extend the boundaries of architecture. Winbush suggests that devops teams must also extend their collaboration to include enterprise architects and review boards. “Don’t see ARBs as roadblocks, and treat them as a trusted team that provides much-needed insight to protect the team and the business,” he suggests. ... “Architectural review boards remain important in agile environments but must evolve beyond manual processes, such as interviews with practitioners and conventional tools that hinder engineering velocity,” says Moti Rafalin, CEO and co-founder of vFunction. “To improve development and support innovation, ARBs should embrace AI-driven tools to visualize, document, and analyze architecture in real-time, streamline routine tasks, and govern app development to reduce complexity.” ... “Architectural observability and governance represent a paradigm shift, enabling proactive management of architecture and allowing architects to set guardrails for development to prevent microservices sprawl and resulting complexity,” adds Rafalin.


Business Internet Security: Everything You Need to Consider

Each device on your business’s network, from computers to mobile phones, represents a potential point of entry for hackers. Treat connected devices as a door to your Wi-Fi networks, ensuring each one is secure enough to protect the entire structure. ... Software updates often include vital security patches that address identified vulnerabilities. Delaying updates on your security software is like ignoring a leaky roof; if left unattended, it will only get worse. Patch management and regularly updating all software on all your devices, including antivirus software and operating systems, will minimize the risk of exploitation. ... With cyber threats continuing to evolve and become more sophisticated, businesses can never be complacent about internet security and protecting their private network and data. Taking proactive steps toward securing your digital infrastructure and safeguarding sensitive data is a critical business decision. Prioritizing robust internet security measures safeguards your small business and ensures you’re well-equipped to face whatever kind of threat may come your way. While implementing these security measures may seem daunting, partnering with the right internet service provider like Optimum can give you a head start on your cybersecurity journey.


How Google Cloud’s Information Security Chief Is Preparing For AI Attackers

To build out his team, Venables added key veterans of the security industry, including Taylor Lehmann, who led security engineering teams for the Americas at Amazon Web Services, and MK Palmore, a former FBI agent and field security officer at Palo Alto Networks. “You need to have folks on board who understand that security narrative and can go toe-to-toe and explain it to CIOs and CISOs,” Palmore told Forbes. “Our team specializes in having those conversations, those workshops, those direct interactions with customers.” ... Generally, a “CISO is going to meet with a very small subset of their clients,” said Charlie Winckless, senior director analyst on Gartner's Digital Workplace Security team. “But the ability to generate guidance on using Google Cloud from the office of the CISO, and make that widely available, is incredibly important.” Google is trying to do just that. Last summer, Venables co-led the development of Google’s Secure AI Framework, or SAIF, a set of guidelines and best practices for security professionals to safeguard their AI initiatives. It’s based on six core principles, including making sure organizations have automated defense tools to keep pace with new and existing security threats, and putting policies in place that make it faster for companies to get user feedback on newly deployed AI tools.


11 ways to ensure IT-business alignment

A key way to facilitate alignment is to become agile enough to stay ahead of the curve and adaptive to change, Bragg advises. The CIO should also speak up early when sensing a possible deviation in the business’s course. “A modern digital corporation requires IT to be a good partner in driving to the future rather than dwelling on a stable state.” IT leaders also need to be agile enough to drive and support change, communicate effectively, and be transparent about current projects and initiatives. ... To build strong ties, IT leaders must also listen to and learn from their business counterparts. “IT leaders can’t create a plan to enable business priorities in a vacuum,” Haddad explains. “It’s better to ask [business] leaders to share their plans, removing the guesswork around business needs and intentions.” ... When IT and the business fail to align, silos begin to form. “In these silos, there’s minimal interaction between parties, which leads to misaligned expectations and project failures because the IT actions do not match up with the company direction and roadmap,” Bronson says. “When companies employ a reactive rather than a proactive approach, the result is an IT function that’s more focused on putting out fires than being a value-add to the business.”


Edge Extending the Reach of the Data Center

Communications savings and low-latency transactions can both be realized when mini-data centers containing servers, storage and other edge equipment are located close to where users work. Industrial manufacturing is a prime example: a single on-site server can run entire assembly lines and robotics without tapping into the central data center. Data that is relevant to the central data center can be sent later, in a batch transaction at the end of a shift. ... Organizations are also choosing to co-locate IT in the cloud. This can reduce the cost of on-site hardware and software, although it increases the cost of processing transactions and may introduce some latency into transaction processing. In both cases, overarching network management tools enable IT to see, monitor and maintain network assets, data, and applications no matter where they are. ... Most IT departments are not yet at a point where all of their IT sits under a central management system with the ability to see, tune, monitor and/or mitigate any event or activity anywhere. However, we are at a point where most CIOs recognize the necessity of funding and building a roadmap to this “uber management” network concept.
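
The batch-at-shift-end pattern described above is straightforward to picture in code. Here is a minimal sketch, assuming a hypothetical HTTP ingest endpoint at the central data center (the URL, payload shape, and function names are illustrative, not drawn from the article): the edge server acts on each reading immediately and defers the upload until the shift ends.

    import json
    import time
    from urllib.request import Request, urlopen

    batch = []  # readings held on the edge server during the shift

    def record(reading: dict) -> None:
        # Act on the reading locally (drive the line, raise alarms),
        # then queue it instead of sending it upstream immediately.
        batch.append({**reading, "ts": time.time()})

    def flush_at_shift_end(ingest_url: str) -> None:
        # One bulk upload replaces thousands of per-event round trips.
        body = json.dumps(batch).encode("utf-8")
        req = Request(ingest_url, data=body,
                      headers={"Content-Type": "application/json"})
        urlopen(req)
        batch.clear()

The trade-off is durability: a real deployment would persist the queue to local disk so that a power cycle mid-shift does not lose the batch.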


Orchestrator agents: Integration, human interaction, and enterprise knowledge at the core

“Effective orchestration agents support integrations with multiple enterprise systems, enabling them to pull data and execute actions across the organizations,” Zllbershot said. “This holistic approach provides the orchestration agent with a deep understanding of the business context, allowing for intelligent, contextual task management and prioritization.” For now, AI agents largely exist as islands unto themselves, although service providers like ServiceNow and Slack have begun integrating with other agents. ... Although AI agents are designed to move through workflows automatically, experts said it is still important that the handoff between human employees and AI agents goes smoothly. The orchestration agent lets humans see where the agents are in the workflow while leaving each agent to figure out its own path to completing the task. “An ideal orchestration agent allows for visual definition of the process, has rich auditing capability, and can leverage its AI to make recommendations and guidance on the best actions. At the same time, it needs a data virtualization layer to ensure orchestration logic is separated from the complexity of back-end data stores,” said Pega’s Schuerman.
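
As a rough illustration of the pattern Schuerman describes (a generic sketch, not Pega’s or any vendor’s actual implementation; all names here are hypothetical), an orchestrator can route each workflow step to a registered agent while keeping the audit trail that humans inspect:

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Orchestrator:
        # Registered agents, keyed by name; each takes a task, returns a result.
        agents: dict[str, Callable[[str], str]] = field(default_factory=dict)
        # Audit trail: one (agent, task, result) record per handoff.
        audit_log: list[tuple[str, str, str]] = field(default_factory=list)

        def register(self, name: str, handler: Callable[[str], str]) -> None:
            self.agents[name] = handler

        def run(self, steps: list[tuple[str, str]]) -> str:
            # Walk the workflow in order, logging every step so a human
            # can always see exactly where the process stands.
            result = ""
            for agent_name, task in steps:
                result = self.agents[agent_name](task)
                self.audit_log.append((agent_name, task, result))
            return result

    flow = Orchestrator()
    flow.register("triage", lambda t: f"triaged: {t}")
    flow.register("resolver", lambda t: f"resolved: {t}")
    print(flow.run([("triage", "reset password"), ("resolver", "reset password")]))

The data virtualization layer Schuerman mentions would sit behind each handler, so the routing logic above never touches back-end data stores directly.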


The Transformative Potential of Edge Computing

Edge computing devices like sensors continuously monitor a connected car’s performance, sending data back to the cloud for real-time analysis. This allows for early detection of potential issues, reducing the likelihood of breakdowns and enabling proactive maintenance. As a result, the vehicle is more reliable and efficient, with reduced downtime. Each sensor relies on a hyperconnected network that seamlessly integrates data-driven intelligence, real-time analytics, and insights through an edge-to-cloud continuum – an interconnected ecosystem spanning diverse cloud services and technologies across various environments. Processing data at the edge, within the vehicle itself, reduces the amount of data that must be transmitted to the cloud. ... No matter the industry, edge computing and cloud technology require a reliable, scalable, and global hyperconnected network – a digital fabric – to deliver operational and innovative benefits to businesses and create new value and experiences for customers. A digital fabric is pivotal in shaping the future of infrastructure. It ensures that businesses can leverage the full potential of edge and cloud technologies by supporting the anticipated surge in network traffic, meeting growing connectivity demands, and addressing complex security requirements.
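
One common way to achieve that reduction is to keep a rolling baseline on the device and forward only the readings that deviate from it. A minimal sketch, with the smoothing factor and deviation threshold chosen purely for illustration:

    ALPHA = 0.1        # smoothing factor for the rolling baseline
    THRESHOLD = 0.15   # forward readings deviating more than 15% from baseline
    baseline = None

    def process(reading: float) -> bool:
        # Return True only when the reading is worth sending to the cloud;
        # everything else is handled (and discarded) on the vehicle.
        global baseline
        if baseline is None:
            baseline = reading
            return True  # first sample seeds the cloud-side baseline too
        anomalous = abs(reading - baseline) > THRESHOLD * abs(baseline)
        baseline = ALPHA * reading + (1 - ALPHA) * baseline
        return anomalous

    # e.g., engine temperature samples: only the spike is transmitted
    print([process(t) for t in [90.0, 91.0, 90.5, 120.0, 91.2]])
    # [True, False, False, True, False]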


The risks and rewards of penetration testing

It is impossible to predict exactly how systems will react to penetration testing. As was the case with our customer, an unknown flaw or misconfiguration can lead to catastrophic results. Skilled penetration testers can usually anticipate such issues. However, even the best white hats are imperfect. It is better to discover these flaws during a controlled test than during a data breach. While tests are being performed, keep IT support staff available to respond to disruptions. Furthermore, do not be alarmed if your penetration testing provider asks you to sign an agreement releasing them from liability arising from the testing. ... Black hats generally follow the path of least resistance to break into systems. This means they use well-known vulnerabilities they are confident they can exploit. Some hackers are still using ancient vulnerability classes, such as SQL injection, which dates back to the late 1990s. They use these because they work. It is uncommon for black hats to use unknown or “zero-day” exploits; these are reserved for high-value targets, such as government, military, or critical infrastructure. It is not feasible for white hats to test every possible way to exploit a system. Rather, they should focus on a broad set of commonly used exploits. Lastly, not every vulnerability is dangerous.
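
SQL injection illustrates why these ancient classes persist: any query built by string concatenation is exploitable, and the fix has been the same for decades. A toy sketch (the schema and values are invented purely for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

    def lookup_vulnerable(name: str):
        # String concatenation: input like ' OR '1'='1 turns the WHERE
        # clause into a tautology and leaks every row.
        query = "SELECT secret FROM users WHERE name = '" + name + "'"
        return conn.execute(query).fetchall()

    def lookup_safe(name: str):
        # Parameterized query: the driver treats the input strictly as data.
        return conn.execute(
            "SELECT secret FROM users WHERE name = ?", (name,)
        ).fetchall()

    payload = "' OR '1'='1"
    print(lookup_vulnerable(payload))  # [('s3cr3t',)] -- leaked
    print(lookup_safe(payload))        # [] -- injection neutralized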


How Data Breaches Erode Trust and What Companies Can Do

A data breach can prompt customers to lose trust in an organisation, compelling them to take their business to a competitor whose reputation remains intact. A breach can discourage partners from continuing their relationship with a company since partners and vendors often share each other’s data, which may now be perceived as an elevated risk not worth taking. Reputational damage can devalue publicly traded companies and scupper a funding round for a private company. The financial cost of reputational damage may not be immediately apparent, but its consequences can reverberate for months and even years. ... In order to optimise cybersecurity efforts, organisations must consider the vulnerabilities particular to them and their industry. For example, financial institutions, often the target of more involved patterns like system intrusion, must invest in advanced perimeter security and threat detection. With internal actors factoring so heavily in healthcare, hospitals must prioritise cybersecurity training and stricter access controls. Major retailers that can’t afford extended downtime from a DoS attack must have contingency plans in place, including disaster recovery.


Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Landry