Daily Tech Digest - September 08, 2024

The hidden cost of speed

The software development engine within a company is like the power grid: it’s a given that it works, and there are no celebrations or accolades for keeping the lights on. When it fails or goes down, however, everyone’s upset and what’s left is assigning blame and determining culpability. Unfortunately, in many industries, the responsible application and development of software is not considered until there’s a problem. There is no “working well” for a developer in an ecosystem without insight and intuition as to how difficult the workload is for various projects or positions. The black and white reality is simply “Working” or “Not working, what the hell is going on, do we need to fire them, why is everything so slow lately?” This can be incredibly frustrating for developers. In my own experience, the person in the worst position is the developer brought in to clean up another developer’s mess. It’s now your responsibility not only to convince management that they need to slow down to give you time to fix things (which will stall sales), but also to architect everything, orchestrate the rollout, and coordinate with sales goals and marketing.


Tracing The Destructive Path of Ransomware's Evolution

Contemporary attackers carefully select high-value organizations and infrastructure to cripple until substantial ransoms are paid — frequently upwards of seven figures for large corporations, hospitals, pipelines, and municipalities. Present-day ransomware groups’ techniques reflect a chilling professionalization of tactics. They leverage military-grade encryption, identity-hiding cryptocurrencies, data-stealing side efforts, and penetration testing of victims before attacks to determine maximum tolerances. Hackers often gain initial entry by purchasing access to systems from underground brokers, then deploy multipart extortion schemes, including threatening distributed denial-of-service (DDoS) attacks, if demands aren’t promptly met. Ransomware perpetrators also tap advancements like artificial intelligence (AI) to accelerate attacks through malicious code generation, underground dark web communities to coordinate schemes, and initial access markets to reduce overhead. ... Ransomware groups continue to innovate their attack methods. Supply chain attacks have become increasingly common. By compromising a single software supplier, attackers can access the networks of thousands of downstream customers.


Zero-Touch Provisioning Simplifies and Augments State and Local Networks

“With zero-touch provisioning unlocking greater time efficiencies, these agencies can more optimally serve the public,” he says. “For example, research shows that shaving mere seconds off emergency response calls yields more lives saved.” Government agencies also can reach wider audiences and increase constituent trust by delivering crucial food and mobile healthcare services faster. Even agencies with strong budgets can benefit from more efficient spending thanks to zero-touch provisioning, DePreta adds. “By eliminating the need for manual intervention, government agencies can optimize budgets to better serve their communities and become smarter in the way they deliver services. From public services such as mobile healthcare clinics to public safety activities such as emergency response and disaster relief, ZTP enables government agencies to do more with less,” he says. ... “You can take a couple of devices and ship them to a branch, and someone who is not necessarily a technical expert in that branch can unbox them and plug them in. You are then up and running right away,” DeBacker says.


Why employee ‘will’ can make or break transformations

Leaders who focus on making work more meaningful and expressing their appreciation inspire and motivate employees. Previous McKinsey research shows that executives at organizations that invest time and effort in changing employee mindsets from the start are four times more likely than those that don’t to say their change programs were successful. Indeed, employees notice when their bosses don’t change their own behaviors to adapt to the goals of transformation. ... The best ideas for how to implement transformation initiatives may come from frontline employees who are closest to the customer. Organizations that encourage employees to pursue innovation and continuous improvement see a higher share of employees who own initiatives or reach milestones during transformations. ... Once leaders have elevated a core group of employees to own initiatives or milestones, they should turn to empowering a broader group to serve as role models who can activate others. These change leaders—influencers, managers, and supervisors—play a visible role in shaping and amplifying the behaviors that enhance organizational performance while counteracting behaviors that get in the way of success.


Deploying digital twins: 7 challenges businesses can face and how to navigate them

An organization adopting digital twins needs to be well-networked. "The biggest roadblock to digital systems is connectivity, at the network and human levels," Thierry Klein, president of Nokia Bell Labs Solutions Research, told ZDNET. "Digital twins are most effective when multiple digital twins are integrated, but this requires collaboration among stakeholders, a robust digital network, and systems that can be connected to the digital twin." ... The ability to represent physical environments in real time also presents challenges to digital twin environments. "With digital twins, you're generally relying on your model to run parallel with some real-life physical system so you can understand certain effects that might be impacting the system," Naveen Rao, vice president of AI for Databricks, told ZDNET. ... The lack of open, interoperable data standards presents another significant roadblock. "Antiquated technology, legacy proprietary data formats, and analog processes create silos of 'dark data' -- or data that's inaccessible to teams across the asset lifecycle," Shelly Nooner, vice president of innovation and platform for Trimble, told ZDNET.


Why CEOs and Corporate Boards Can’t Afford to Get AI Governance Wrong

The first step in preparing for safe and successful AI adoption is establishing the necessary C-Suite governance structures. This needs to be a point of urgency, as far more advanced and powerful AI capabilities, including Artificial General Intelligence (AGI), where AI may be able to perform human cognitive tasks better than the smartest human being, loom on the horizon. BCG published a leadership report earlier this year entitled “Every C-Suite Member Is Now a Chief AI Officer.” ... Corporate leadership and boards must determine how best to manage the risks and opportunities presented by AI to serve their customers and protect their stakeholders. To begin with, they must identify where management responsibility should sit, and how these responsibilities should be structured. BCG’s report states that from the CEO on down, there needs to be, at minimum, “a basic understanding of GenAI, particularly with respect to security and privacy risks,” adding that business leaders “must have confidence that all decisions strike the right balance between risk and business benefit.”


Get ready for a tumultuous era of GPU cost volatility

Demand is almost certain to increase as companies continue to build AI at a rapid pace. Investment firm Mizuho has said the total market for GPUs could grow tenfold over the next five years to more than $400 billion, as businesses rush to deploy new AI applications. Supply depends on several factors that are hard to predict. They include manufacturing capacity, which is costly to scale, as well as geopolitical considerations — many GPUs are manufactured in Taiwan, whose continued independence is threatened by China. Supplies have already been scarce, with some companies reportedly waiting six months to get their hands on Nvidia’s powerful H100 chips. As businesses become more dependent on GPUs to power AI applications, these dynamics mean that they will need to get to grips with managing variable costs. ... To lock in costs, more companies may choose to manage their own GPU servers rather than renting them from cloud providers. This creates additional overhead but provides greater control and can lead to lower costs in the longer term. Companies may also buy up GPUs defensively: Even if they don’t know how they’ll use them yet, these defensive contracts can ensure they’ll have access to GPUs for future needs — and that their competitors won’t.


Optimizing Continuous Deployment at Uber: Automating Microservices in Large Monorepos

The new system, named Up CD, was designed to improve automation and safety. It is tightly integrated with Uber's internal cloud platform and observability tools, ensuring that deployments follow a standardized and repeatable process by default. The new system prioritized simplicity and transparency, especially in managing monorepos. One key improvement was optimizing deployments by looking at which services were affected by each commit, rather than deploying every service with every code change. This reduced unnecessary builds and gave engineers more clarity over the changes impacting their services. ... Up introduced a unified commit flow for all services, ensuring that each service progressed through a series of deployment stages, each with its own safety checks. These checks included time delays, deployment windows, and service alerts, ensuring deployments were triggered only when safe. Each stage operated independently, allowing flexibility in customizing deployment flows while maintaining safety. This new approach reduced manual errors and provided a more structured deployment experience.
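
As a rough illustration of the two ideas above, here is a minimal sketch of mapping changed monorepo paths to affected services and gating stage promotion on soak time, deployment windows, and alerts. The service names, ownership map, and thresholds are hypothetical, not Uber's actual implementation.

```python
# Hypothetical sketch: deploy only the services a commit touches, and promote a
# deployment stage only when its safety conditions are satisfied.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Stage:
    name: str
    min_soak: timedelta                 # required time delay before promotion
    window: tuple[int, int] = (9, 17)   # allowed deployment hours (local time)


def affected_services(changed_paths: list[str], ownership: dict[str, str]) -> set[str]:
    """Map changed file paths in the monorepo to the services that own them."""
    hits = set()
    for path in changed_paths:
        for prefix, service in ownership.items():
            if path.startswith(prefix):
                hits.add(service)
    return hits


def can_promote(stage: Stage, deployed_at: datetime, active_alerts: int, now: datetime) -> bool:
    """A stage promotes only after its soak time, inside its window, with no alerts."""
    soaked = now - deployed_at >= stage.min_soak
    in_window = stage.window[0] <= now.hour < stage.window[1]
    return soaked and in_window and active_alerts == 0


if __name__ == "__main__":
    ownership = {"services/payments/": "payments", "services/rides/": "rides"}
    print(affected_services(["services/payments/api.py"], ownership))  # {'payments'}
    canary = Stage("canary", min_soak=timedelta(hours=1))
    print(can_promote(canary, datetime(2024, 9, 8, 10), 0, datetime(2024, 9, 8, 12)))  # True
```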


Cybercriminals increasingly use legitimate software for attacks

The report underscores the growing trend of attackers adopting legitimate tools to evade security measures and deceive security personnel. These tools are used for various malicious activities, including spreading ransomware, conducting network scanning, lateral movement within networks, and establishing command-and-control (C2) operations. Among the tools identified in the report are PDQ Deploy, PSExec, Rclone, SoftPerfect, AnyDesk, ScreenConnect, and WMIC. A series of case studies detailed in the report highlights specific incidents involving these tools. Between September 2023 and August 2024, 22 posts on various criminal forums discussed or shared cracked versions of the SoftPerfect network scanner. ... Remote management and monitoring (RMM) tools like AnyDesk and ScreenConnect are also prominently featured in criminal discussions. An August 2024 post on the RAMP forum described using AnyDesk during a penetration test and recommended disabling secure logon for successful connections. Initial Access Brokers (IABs) frequently sell access to networks through these established remote management and monitoring tool connections.


Principles of Modern Data Infrastructure

Designing a modern data infrastructure to fail fast means creating systems that can quickly detect and handle failures, improving reliability and resilience. If a system goes down, most of the time, the problem is with the data layer not being able to handle the stress rather than the application compute layer. While scaling, when one or more components within the data infrastructure fail, they should fail fast and recover fast. In the meantime, since the data layer is stateful, the whole fail-and-recovery process should minimize data inconsistency as well. ... By default, databases and data stores need to be able to respond quickly to user queries under heavy throughput. Users expect a real-time or near-real-time experience from all applications. Much of the time, even a few milliseconds is too slow. For instance, a web API request may translate to one or a few queries to the primary on-disk database and then a few to tens of operations to the in-memory data store. For each in-memory data store operation, a sub-millisecond response time is a bare necessity for the expected user experience.
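
A minimal sketch of the fail-fast idea at the data layer, assuming generic stand-in cache and database callables rather than any specific product's client API: each call gets a hard time budget, and exceeding it surfaces immediately instead of letting requests queue up.

```python
# Fail-fast sketch: a strict budget for the in-memory store, a slightly larger one
# for the on-disk database, and an immediate error when a budget is blown.
import concurrent.futures
import time

_pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)


class DataLayerTimeout(Exception):
    pass


def query_with_budget(fn, budget_seconds: float):
    """Run a data-layer call with a hard time budget; fail fast if it is exceeded."""
    future = _pool.submit(fn)
    try:
        return future.result(timeout=budget_seconds)
    except concurrent.futures.TimeoutError:
        # The worker thread keeps running in the background; the caller moves on now.
        raise DataLayerTimeout(f"data layer exceeded {budget_seconds * 1000:.1f} ms budget")


def handle_request(cache_get, db_get):
    try:
        return query_with_budget(cache_get, budget_seconds=0.002)   # ~2 ms for the cache
    except DataLayerTimeout:
        return query_with_budget(db_get, budget_seconds=0.050)      # ~50 ms for the database


if __name__ == "__main__":
    def slow_cache():
        time.sleep(0.5)           # simulate an overloaded in-memory store
        return "cached"

    def fast_db():
        return "from database"

    print(handle_request(slow_cache, fast_db))  # falls back to the database path
```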



Quote for the day:

"Leaders must be good listeners. It's rule number one, and it's the most powerful thing they can do to build trusted relationships." -- Lee Ellis

Daily Tech Digest - September 07, 2024

Why RAG Is Essential for Next-Gen AI Development

The success of RAG implementation often depends on a company’s willingness to invest in curating and maintaining high-quality knowledge sources. Failure to do this will severely impact RAG performance and may lead to LLM responses of much poorer quality than expected. Another difficult task that companies frequently run into is developing an effective retrieval mechanism. Dense retrieval, a semantic search technique, and learned retrieval, which involves the system recalling information, are two approaches that produce favorable results. Many companies also struggle to integrate RAG into existing AI systems and to scale RAG to handle large knowledge bases. Potential solutions to these challenges include efficient indexing and caching and implementing distributed architectures. Another common problem is properly explaining the reasoning behind RAG-generated responses, as they often involve information taken from multiple sources and models. ... By integrating external knowledge sources, RAG helps LLMs overcome the limitations of parametric memory and dramatically reduce hallucinations. As Douwe Kiela, an author of the original paper about RAG, said in a recent interview
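
For readers new to the pattern, a minimal sketch of the retrieve-then-generate loop described here. The embed() and generate() functions are toy stand-ins for whatever embedding model and LLM endpoint an implementation actually uses; only the overall shape is the point.

```python
# Minimal RAG sketch: embed a small knowledge base, retrieve the most relevant
# passages for a question, and prepend them to the prompt so the model answers
# from curated sources rather than parametric memory alone.
import numpy as np


def embed(texts: list[str]) -> np.ndarray:
    # Stand-in for a real embedding model: a toy hash-derived vector per text.
    seeds = [abs(hash(t)) % (10 ** 8) for t in texts]
    return np.array([[((s >> shift) & 0xFF) / 255.0 for shift in range(0, 64, 8)] for s in seeds])


def generate(prompt: str) -> str:
    # Stand-in for a real LLM call (hosted API, local model, etc.).
    return "[LLM response grounded in the retrieved context]"


def retrieve(question: str, docs: list[str], doc_vecs: np.ndarray, k: int = 2) -> list[str]:
    q = embed([question])[0]
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    return [docs[i] for i in np.argsort(-sims)[:k]]


def answer(question: str, docs: list[str]) -> str:
    context = "\n".join(retrieve(question, docs, embed(docs)))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)


if __name__ == "__main__":
    kb = ["Our refund window is 30 days.", "Support hours are 9-5 UTC.", "Shipping takes 3 days."]
    print(answer("How long do refunds take?", kb))
```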


A global assessment of third-party connection tampering

To be clear, there are many reasons a third party might tamper with a connection. Enterprises may tamper with outbound connections from their networks to prevent users from interacting with spam or phishing sites. ISPs may use connection tampering to enforce court or regulatory orders that demand website blocking to address copyright infringement or for other legal purposes. Governments may mandate large-scale censorship and information control. Despite the fact that everyone knows it happens, no other large operation has previously looked at the use of connection tampering at scale and across jurisdictions. We think that creates a notable gap in understanding what is happening in the Internet ecosystem, and that shedding light on these practices is important for transparency and the long-term health of the Internet. ... Ultimately, connection tampering is possible only by accident – an unintended side effect of protocol design. On the Internet, the most common identity is the domain name. In a communication on the Internet, the domain name is most often transmitted in the “server name indication (SNI)” field in TLS – exposed in cleartext for all to see.
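
A small illustration of that last point using Python's standard library: the server_hostname passed below is placed in the SNI extension of the TLS ClientHello and travels in cleartext before any handshake secrets exist (unless Encrypted Client Hello is in use), which is exactly the field on-path middleboxes can match on when they tamper with connections.

```python
# The requested domain is visible on the wire in the ClientHello's SNI field.
import socket
import ssl

hostname = "example.com"
context = ssl.create_default_context()

with socket.create_connection((hostname, 443), timeout=5) as raw_sock:
    # wrap_socket places `hostname` into the cleartext SNI extension of the ClientHello.
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        print(tls_sock.version(), tls_sock.getpeercert()["subject"])
```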


The human brain deciphered and the first neural map created

The formation of such a neural map was made possible with the help of several technologies. First, as mentioned earlier, the employment of electron microscopy enabled the researchers to obtain images of the brain tissue at a scale that could capture details of synapses. These images provided the necessary level of detail to reveal how neurons are connected and can communicate with other neurons. Second, the massive volume of data produced by the imaging process required substantial computing capability and machine learning to parse and analyze. It was also claimed that the company’s experience in AI and data processing was helpful in correctly aligning the 2D images into a 3D reconstruction and in properly segmenting the many parts of the brain tissue. Finally, the decision to share the neural map as an open-access database has extended the potential for future research and cooperation in the sphere of neuroscience. The development of this neural map has excellent potential for neuroscience and other disciplines. In neuropharmacology, the map offers an opportunity to gain a substantial amount of information about how neurons are wired within the brain and how certain diseases, such as schizophrenia or autism, occur.


InfoQ AI, ML and Data Engineering Trends Report - September 2024

AI-enabled agent programs are another area that’s seeing a lot of innovation. Autonomous agents and GenAI-enabled virtual assistants are coming up in different places to help software developers become more productive. AI-assisted programs can enable individual team members to increase productivity or collaborate with each other. GitHub’s Copilot, Microsoft Teams’ Copilot, DevinAI, Mistral’s Codestral, and JetBrains’ local code completion are some examples of AI agents. GitHub also recently announced its GitHub Models product to enable the large community of developers to become AI engineers and build with industry-leading AI models. ... With the emergence of multimodal language models like GPT-4o, privacy and security when handling non-textual data like videos become even more critical in the overall machine learning pipelines and DevOps processes. The podcast panelists’ AI safety and security recommendations are to have a comprehensive lineage and mapping of where your data is going. Train your employees to have proper data privacy and security practices, and also make the secure path the path of least resistance for them so everyone within your organization easily adopts it.


Does it matter what kind of hard drive you use in a NAS?

Consumer drives aren't designed for heavier workloads, nor are they built to run with multiple units adjacent to one another. This can cause issues with vibrations, particularly for 3.5-inch mechanical drives. Firmware and endurance are other concerns since the drives themselves won't be built with RAID and NAS in mind. Combining the two with heavier workloads through multiple user accounts and clients could make drive failure more likely. These drives will be cheaper than their NAS equivalents, however, and no drive is immune to failure. You could see consumer drives outlive NAS drives inside the same enclosure. ... Shingled magnetic recording (SMR) and conventional magnetic recording (CMR) are two types of storage technologies used for storing data on spinning platters inside an HDD. CMR uses concentric circles (or tracks) for saving data, which are segmented into sectors. Everything is recorded linearly with each sector being written and read independently, allowing specific sectors to be rewritten without affecting any other sector on the drive. SMR is a newer technology that takes the same concentric-circle approach but overlaps the tracks to bolster storage capacity, at the cost of performance and, potentially, reliability.


What’s next in AI and HPC for IT leaders in digital infrastructure?

The AI nirvana for enterprises? In 2024, we'll see enterprises build ChatGPT-like GenAI systems for their own internal information resources. Since many companies' data resides in silos, there is a real opportunity to manage AI demand, build AI expertise, and foster cross-functional collaboration. This access to data comes with an existential security risk that could strike at the heart of a company: intellectual property. That’s why in 2024, forward-thinking enterprises will use AI for robust data security and privacy measures to ensure intellectual property doesn’t get exposed on the public internet. They will also shrink the threat landscape by homing in on internal security risks. This includes the development of internal regulations to ensure sensitive information isn't leaked to non-privileged internal groups and individuals. ... At this early stage of AI initiatives, enterprises are dependent on technology providers and their partners to advise and support the global roll-out of AI initiatives. In Asia Pacific, it’s a race to build, deploy, and subsequently train the right AI clusters. Since a prime use case is cybersecurity threat detection, working with the respective cybersecurity technology providers is key.


Red Hat unleashes Enterprise Linux AI - and it's truly useful

In a statement, Joe Fernandes, Red Hat's Foundation Model Platform vice president, said, "RHEL AI provides the ability for domain experts, not just data scientists, to contribute to a built-for-purpose gen AI model across the hybrid cloud while also enabling IT organizations to scale these models for production through Red Hat OpenShift AI." RHEL AI isn't tied to any single environment. It's designed to run wherever your data lives -- whether it be on-premise, at the edge, or in the public cloud. This flexibility is crucial when implementing AI strategies without completely overhauling your existing infrastructure. The program is now available on Amazon Web Services (AWS) and IBM Cloud as a "bring your own (BYO)" subscription offering. In the next few months, it will be available as a service on AWS, Google Cloud Platform (GCP), IBM Cloud, and Microsoft Azure. Dell Technologies has announced a collaboration to bring RHEL AI to Dell PowerEdge servers. This partnership aims to simplify AI deployment by providing validated hardware solutions, including NVIDIA accelerated computing, optimized for RHEL AI.


Quantum computing is coming – are you ready?

The good thing is that awareness of the challenge is increasing. Some verticals, such as finance, have it absolutely top of mind with some already having quantum safe algorithms in production. Likewise, some manufacturing sectors are examining the impact, given the implications of having to upgrade embedded or IoT devices. And, of course, medical devices offer a particularly heightened security and trust challenge. "I think for these device manufacturers, they had a moment where they realized they can't go ahead and push the devices out as fast as they are without thinking about proper security," says Hojjati. But not everyone is on top of the problem. Which is why DigiCert is backing Quantum Readiness Day on September 26, to coincide with the expected finalization of the new algorithms by NIST. The worldwide event will bring together experts, both in how to break encryption and how to implement the upcoming post quantum algorithms, helping you make sure you're ahead of the problem. As Hojjati says, whether we've reached Q Day or not, "This is real, this is here, the standards have been released. ..."


How cyberattacks on offshore wind farms could create huge problems

Successful cyberattacks could lower public trust in wind energy and other renewables, the report from the Alan Turing Institute says. The authors add that artificial intelligence (AI) could help boost the resilience of offshore wind farms to cyber threats. However, government and industry need to act fast. The fact that offshore wind installations are relatively remote makes them particularly vulnerable to disruption. Land turbines can have nearby offices, so getting someone to visit the site is much easier than at sea. Offshore turbines tend to require remote monitoring and special technology for long distance communication. These more complicated solutions mean that things can go wrong more easily. ... Most cyberattacks are financially motivated, such as the ransomware attacks that have targeted the NHS in recent years. These typically block the users’ access to their computer data until a payment is made to the hackers. But critical infrastructure such as energy installations are also exposed. There may be various motivations for launching cyberattacks against them. One important possibility is that of a hostile state that wants to disrupt the UK’s energy supply – and perhaps also undermine public confidence in it.


Data Skills Gap Is Hampering Productivity; Is Upskilling the Answer?

"A well-crafted data strategy will highlight where specific skills need to be developed to achieve business objectives," said Michael Curry, president of data modernization at Rocket Software. He explained that since a data strategy typically involves both risk mitigation and value realization, it's important to consider skill gaps on both sides. Kjell Carlsson, head of AI strategy at Domino Data Labs, said better data prep, analysis, and visualization skills would help organizations become more data-driven and make better decisions that would significantly improve growth and curtail waste. "Imbuing your workforce with better prompt engineering skills will help them code, research, and write vastly more efficiently," he said. "A well-crafted data strategy will highlight where specific skills need to be developed to achieve business objectives," said Michael Curry, president of data modernization at Rocket Software. He explained that since a data strategy typically involves both risk mitigation and value realization, it's important to consider skill gaps on both sides. ... "Imbuing your workforce with better prompt engineering skills will help them code, research, and write vastly more efficiently," he said.



Quote for the day:

"Leadership should be born out of the understanding of the needs of those who would be affected by it." -- Marian Anderson

Daily Tech Digest - September 06, 2024

Quantum utility: The next milestone on the road to quantum advantage

“Quantum utility is a term that has only been coined recently, in the last 12 months or so. On the timeline that I’ve just described, there is a milestone that sits between where we are now and the beginning of this quantum advantage era. And that is this quantum utility concept. It’s basically where quantum computers are able to demonstrate, or in this case, in recent demonstrations, simulate a problem beyond the capabilities of just brute force classical computation using sufficiently large quantum computational devices. So, in this case, devices with more than 100 qubits,” she says. ... “It’s really an indication of how close we are to demonstrating quantum advantage, and where we can hopefully begin to see quantum computing computers serving as a scientific tool to explore a new scale of problems beyond brute force, classical simulation. So, it’s an indication of how close we are to quantum advantage and ideally, we’ll be hoping to see some demonstration of that in the next few years. No one really knows exactly when, but the idea is that those who are able to harness this era of quantum utility will also be among the first to achieve real quantum advantage as well.”


5 tips for switching to skills-based hiring

Skills come in a variety of forms, such as hard skills, which comprise the technical skills necessary to complete tasks; soft skills, which center around a person’s interpersonal skills; and cognitive skills, which include problem solving, decision making, and logical reasoning, among other skills. Before embarking on a skills-based hiring strategy, it’s vital to have clear insight into the skills your organization already has internally, in addition to all the skills needed to complete projects and reach business goals. As you identify and categorize skills, it’s important to review job descriptions as well to ensure they’re up-to-date and don’t include any unnecessary skills or vague requirements. It’s crucial as well to evaluate how your job descriptions are written to ensure you’re drawing in the right talent for open roles. Wording job descriptions can be especially tricky when it comes to soft skills. For example, if your organization values someone who’s humble or savvy, you’ll need to identify how that translates to a skill you can list on a job description and, eventually, verify, says Hannah Johnson, senior VP for strategy and market development at IT trade association CompTIA.


Could California's AI Bill Be a Blueprint for Future AI Regulation?

“If approved, legislation in an influential state like California could help to establish industry best practices and norms for the safe and responsible use of AI,” Ashley Casovan, managing director, AI Governance Center at non-profit International Association of Privacy Professionals (IAPP), says in an email interview. California is hardly the only place with AI regulation on its radar. The EU AI Act passed earlier this year. The federal government in the US released an AI Bill of Rights, though this serves as guidance rather than regulation. Colorado and Utah enacted laws applying to the use of AI systems. “I expect that there will be more domain-specific or technology-specific legislation for AI emerging from all of the states in the coming year,” says Casovan. But as quickly as new AI legislation and the accompanying debates pop up, AI moves faster. “The biggest challenge here…is that the law has to be broad enough because if it's too specific maybe by the time it passes, it is already not relevant,” says Ruzzi. Another big part of the AI regulation challenge is agreeing on what safety in AI even means. “What safety means is…very multifaceted and ill-defined right now,” says Vartak.


Why and How to Secure GenAI Investments From Day Zero

Because GenAI remains a relatively novel concept that many companies are officially using only in limited contexts, it can be tempting for business decision-makers to ignore or downplay the security stakes of GenAI for the time being. They assume there will be time to figure how to secure large language models (LLMs) and mitigate data privacy risks later, once they’ve established basic GenAI use cases and strategies. Unfortunately, this attitude toward GenAI is a huge mistake, to put it mildly. It’s like learning to pilot a ship without thinking about what you’ll do if the ship sinks, or taking up a high-intensity sport without figuring out how to protect yourself from injury until you’ve already broken a bone. A healthier approach to GenAI is one in which organizations build security protections from the start. Here’s why, along with tips on how to integrate security into your organization’s GenAI strategy from day zero. ... GenAI security and data privacy challenges exist regardless of the extent to which an organization has adopted GenAI or which types of use cases it’s targeting. It’s not as if they only matter for companies making heavy use of AI or using AI in domains where special security, privacy or compliance risks apply.


US, UK and EU sign on to the Council of Europe’s high-level AI safety treaty

The high-level treaty sets out to focus on how AI intersects with three main areas: human rights, which includes protecting against data misuse and discrimination, and ensuring privacy; protecting democracy; and protecting the “rule of law.” Essentially the third of these commits signing countries to setting up regulators to protect against “AI risks.” The more specific aim of the treaty is as lofty as the areas it hopes to address. “The treaty provides a legal framework covering the entire lifecycle of AI systems,” the COE notes. “It promotes AI progress and innovation, while managing the risks it may pose to human rights, democracy and the rule of law. To stand the test of time, it is technology-neutral.” ... The idea seems to be that if AI does represent a mammoth change to how the world operates, if not watched carefully, not all of those changes may turn out to be for the best, so it’s important to be proactive. However there is also clearly nervousness among regulators about overstepping the mark and being accused of crimping innovation by acting too early or applying too broad a brush. AI companies have also jumped in early to proclaim that they, too, are just as interested in what’s come to be described as AI Safety. 


Fight Against Ransomware and Data Threats

Ransomware as a Service (RaaS) is becoming a massive industry. The tools to create ransomware attacks are readily available online, and it’s becoming easier for people, even those with limited technical skills, to launch attacks. We have the largest pool of software developers in the world, and unfortunately, a small portion of them see ransomware as a way to make easy money. There are even reports of recruitment drives in certain states to hire engineers or tech-savvy individuals to develop ransomware software. ... The industries most affected by ransomware tend to be those that are heavily regulated, such as BFSI (Banking, Financial Services, and Insurance), healthcare, and insurance. These industries deal with highly valuable, critical data, which makes them prime targets for attackers. Because of the sensitive nature of the data they handle, these organizations are often willing to pay the ransom to get it back. The reason these industries are so heavily regulated is that they’re dealing with data that is more critical than in other industries. Healthcare companies, for example, are regulated by agencies like the FDA in the U.S. and their Indian equivalent. Financial services are regulated by the RBI or SEBI in India.


Cloud Security Assurance: Is Automation Changing the Game?

For cloud workloads, security assurance teams must assess and gather evidence for each component’s adherence to security standards, including for components and configurations the cloud provider runs. Luckily, cloud providers offer downloadable assurance and compliance certificates. These certificates and reports are essential for the cloud providers’ business. Larger customers, especially, work only with vendors that adhere to the standards relevant to these customers. The exact standards vary by the customers’ jurisdiction and industry. Azure, for example, provides an extensive range of global, country-specific, and industry-specific standards for download to its customers and prospects. ... These cloud security assurance reports cover the infrastructure layer and the security of the cloud provider’s IaaS, PaaS, and SaaS services. They do not cover customer-specific configurations, patching, or operations, including securing AWS S3 buckets against unauthorized access or patching VMs. Whether customers configure these services securely and put them adequately together is in the customers’ hands – and the customer security assurance team must validate that.


The Road from Chatbots and Co-Pilots to LAMs and AI Agents

“We are beginning an evolution from knowledge-based, gen-AI-powered tools–say, chatbots that answer questions and generate content–to gen AI–enabled ‘agents’ that use foundation models to execute complex, multistep workflows across a digital world,” analysts with the consulting giant write. “In short, the technology is moving from thought to action.” AI agents, McKinsey says, will be able to automate “complex and open-ended use cases” thanks to three characteristics they possess, including the capability to manage multiplicity; the capability to be directed by natural language; and the capability to work with existing software tools and platforms. ... “Although agent technology is quite nascent, increasing investments in these tools could result in agentic systems achieving notable milestones and being deployed at scale over the next few years,” the company writes. PC acknowledges that there are some challenges to building automated applications with the LAM architecture at this point. LLMs are probabilistic and sometimes can go off the rails, so it’s important to keep them on track by combining them with classical programming using deterministic techniques.
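
One common way to apply that last point is sketched below with a hypothetical call_llm() stand-in: wrap the probabilistic model call in deterministic parsing, validation, and retry logic so an agent only acts on output that passes explicit checks.

```python
# Deterministic guardrails around a probabilistic model: parse, validate, retry,
# and fall back to a safe action when the model misbehaves.
import json


class InvalidPlan(Exception):
    pass


def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; imagine it returns a JSON "action plan".
    return '{"action": "refund", "order_id": "A123", "amount": 25.0}'


def validate(plan: dict) -> dict:
    # Business rules the agent enforces regardless of what the model says.
    if plan.get("action") not in {"refund", "escalate", "noop"}:
        raise InvalidPlan("unknown action")
    if plan.get("action") == "refund" and not (0 < float(plan.get("amount", 0)) <= 100):
        raise InvalidPlan("refund amount out of policy")
    return plan


def plan_action(prompt: str, retries: int = 3) -> dict:
    for _ in range(retries):
        try:
            return validate(json.loads(call_llm(prompt)))
        except (json.JSONDecodeError, InvalidPlan):
            continue
    return {"action": "escalate"}  # deterministic fallback when the model keeps failing


if __name__ == "__main__":
    print(plan_action("Customer asks for a refund on order A123"))
```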


Are you ready for data hyperaggregation?

Data hyperaggregation is not simply a technological advancement. It’s a strategic initiative that aligns with the broader trend of digital transformation. Its ability to provide a unified view of disparate data sources empowers organizations to harness their data effectively, driving innovation and creating competitive advantages in the digital landscape. As the field continues to evolve, the fusion of data hyperaggregation with cutting-edge technologies will undoubtedly shape the future of cloud computing and enterprise data strategies. The problems and solutions related to enterprise data aggregation are familiar. Indeed, I wrote books about it in the 1990s. In 2024, we still can’t get it right. The problems have actually gotten much worse with the addition of cloud providers and the unwillingness to break down data silos within enterprises. Things didn’t get simpler, they got more complex. Now, AI needs access to most data sources that enterprises maintain. Because universal access methodologies still don’t exist, we invented a new buzzword, “data hyperaggregation.” If this iteration of data gathering catches on, we get to solve the disparate data problem for more reasons than just AI. I hold out hope. Am I naive? We’ll see.


Unlock Business Value Through Effective DevOps Infrastructure Management

Whatever mix of architectures an organization uses, however, the best strategy is rooted in their specific needs, focusing on profitability and customer satisfaction. Overly complex systems not only cost more, but they also reduce the return on investment (ROI) and efficiency. Innovation delivers services to customers faster and more efficiently than before. With the plethora of technologies available today, it's imperative for organizations to be clear about what provides real value to reduce the cost and time spent on infrastructure issues. ... Adopting DevOps infrastructure management practices encourages the use of solutions like IaC, making deployments more repeatable, scalable, and reliable. Automation and continuous monitoring free up resources to focus on a broader range of tasks, including security, developer experience, and time to market. Robust documentation processes are critical to preserve this culture of continuous improvement, efficiency, and productivity over time. Should a project be handed to a new team, documentation helps maintain continuity and can reveal historical inefficiencies or issues. 



Quote for the day:

“People are not lazy. They simply have important goals – that is, goals that do not inspire them.” -- Tony Robbins

Daily Tech Digest - September 05, 2024

What Does the Car of the Future Look Like?

Enabled with IoT, the vehicles stay in sync with their environments. The ConnectedDrive feature, for example, enables predictive maintenance by using IoT sensors to monitor vehicle health and performance in real time and notify drivers about upcoming maintenance needs. IoT also paves the way for vehicle-to-everything (V2X) communication, which enables BMW cars to interact with traffic lights, road signs and other vehicles. But a smart car is more than just internet-connected. ... The next leap in sensor technology is quantum sensing. Image generation systems based on infrared, ultrasound and radar are already in use. But with multisensory systems, BMW vehicles will not only be able to detect potential hazards more accurately but also predict and prevent damage - a capability crucial for automated and autonomous driving systems. These sensors will allow vehicles to "feel" their surroundings, enabling more refined surface control and the ability to perform complex tasks, such as the automated assembly of intricate components. Predictive maintenance, powered by multisensory input, will serve as an early warning system in production, reducing downtime.


NIST Cybersecurity Framework (CSF) and CTEM – Better Together

CSF's core functions align well with the CTEM approach, which involves identifying and prioritizing threats, assessing the organization's vulnerability to those threats, and continuously monitoring for signs of compromise. Adopting CTEM empowers cybersecurity leaders to significantly mature their organization's NIST CSF compliance. Prior to CTEM, periodic vulnerability assessments and penetration testing to find and fix vulnerabilities were considered the gold standard for threat exposure management. The problem was, of course, that these methods only offered a snapshot of security posture – one that was often outdated before it was even analyzed. CTEM has come to change all this. The program delineates how to achieve continuous insights into the organizational attack surface, proactively identifying and mitigating vulnerabilities and exposures before attackers exploit them. To make this happen, CTEM programs integrate advanced tech like exposure assessment, security validation, automated security validation, attack surface management, and risk prioritization.


Leveling Up to Responsible AI Through Simulations

This simulation highlighted the challenges and opportunities involved in embedding responsible AI practices within Agile development environments. The lessons learned from this exercise are clear: expertise, while essential, must be balanced with cross-disciplinary collaboration; incentives need to be aligned with ethical outcomes; and effective communication and documentation are crucial for ensuring accountability. Moving forward, organizations must prioritize the development of frameworks and cultures that support responsible AI. This includes creating opportunities for ongoing education and reflection, fostering environments where diverse perspectives are valued, and ensuring that all stakeholders—from engineers to policymakers—are equipped and incentivized to navigate the complexities of responsible Agile AI development. Simulations like the one we conducted are a valuable tool in this effort. By providing a realistic, immersive experience, they help professionals from diverse backgrounds understand the challenges of responsible AI development and prepare them to meet these challenges in their own work. As AI continues to evolve and become increasingly integrated into our lives, the need for responsible development practices will only grow.


What software supply chain security really means

Upon reflection, the “supply chain” aspect of software supply chain security suggests the crucial ingredient of an improved definition. Software producers, like manufacturers, have a supply chain. And software producers, like manufacturers, require inputs and then perform a manufacturing process to build a finished product. In other words, a software producer uses components, developed by third parties and themselves, and technologies to write, build, and distribute software. A vulnerability or compromise of this chain, whether done via malicious code or via the exploitation of an unintentional vulnerability, is what defines software supply chain security. I should mention that a similar, rival data set maintained by the Atlantic Council uses this broader definition. I admit to still having one general reservation about this definition: It can feel like software supply chain security subsumes all of software security, especially the sub-discipline often called application security. When a developer writes a buffer overflow in the open source software library your application depends upon, is that application security? Yep! Is that also software supply chain security?


Data privacy and security in AI-driven testing

As the technology has become more accepted and widespread, the focus has shifted from disbelief in its capabilities to a deep concern for how it handles sensitive data. At Typemock, we’ve adapted to this shift by ensuring that our AI-driven tools not only deliver powerful testing capabilities but also prioritize data security at every level. ... While concerns about IP leakage and data permanence are significant today, there is a growing shift in how people perceive data sharing. Just as people now share everything online, often too loosely in my opinion, there is a gradual acceptance of data sharing in AI-driven contexts, provided it is done securely and transparently. Greater awareness and education: In the future, as people become more educated about the risks and benefits of AI, the fear surrounding data privacy may diminish. However, this will also require continued advancements in AI security measures to maintain trust. Innovative security solutions: The evolution of AI technology will likely bring new security solutions that can better address concerns about data permanence and IP leakage. These solutions will help balance the benefits of AI-driven testing with the need for robust data protection.


QA's Dead: Where Do We Go From Here?

Developers are now the first line of quality control. This is possible through two initiatives. First, iterative development. Agile methodologies mean teams now work in short sprints, delivering functional software more frequently. This allows for continuous testing and feedback, catching issues earlier in the process. It also means that quality is no longer a final checkpoint but an ongoing consideration throughout the development cycle. Second, tooling. Automated testing frameworks, CI/CD pipelines, and code quality tools have allowed developers to take on more quality control responsibilities without risking burnout. These tools allow for instant feedback on code quality, automated testing on every commit, and integration of quality checks into the development workflow. ... The first opportunity is down the stack, moving into more technical roles. QA professionals can leverage their quality-focused mindset to become automation specialists or DevOps engineers. Their expertise in thorough testing can be crucial in developing robust, reliable automated test suites. The concept that "flaky tests are worse than no tests" becomes even more critical when the tests are all that stop an organization from shipping low-quality code.
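
To make the flaky-test point concrete, here is a small pytest-style sketch; the discount logic is invented for illustration. The design choice is simply that anything nondeterministic (the clock, in this case) is passed in explicitly, so the same commit always produces the same verdict in CI.

```python
# Deterministic tests: inject the date rather than reading the system clock inside
# the function under test, so the suite cannot flake depending on when CI runs.
from datetime import date


def discount(order_total: float, today: date) -> float:
    """10% off orders over 100, doubled during a January sale."""
    if order_total <= 100:
        return 0.0
    rate = 0.20 if today.month == 1 else 0.10
    return round(order_total * rate, 2)


def test_no_discount_under_threshold():
    assert discount(80.0, date(2024, 6, 1)) == 0.0


def test_standard_discount():
    assert discount(200.0, date(2024, 6, 1)) == 20.0


def test_january_sale_is_deterministic():
    # Pinning the date keeps this test stable on every run of the pipeline.
    assert discount(200.0, date(2024, 1, 15)) == 40.0
```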


Serverless Is Trending Again in Modern Application Development

A better definition has emerged as serverless becomes a path to developer productivity. The term "serverless" was always a misnomer and, even among end users and vendors, tended to mean different things depending on product and use case. Just as the cloud is someone else's computer, serverless is still someone else's server. Today, things are much clearer. A serverless application is a software component that runs inside of an environment that manages the underlying complexity of deployment, runtimes, protocols, and process isolation so that developers can focus on their code. Enterprise success stories delivered proven, repeatable use case solutions. The initial hype around serverless centered around fast development cycles and back-end use cases where serverless functions acted as the glue between disparate cloud services. ... Since then, we've seen many more enterprise customers taking advantage of serverless. An expanded ecosystem of ancillary services drives emerging use cases. The core use case of serverless remains building lightweight, short-running ephemeral functions.
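
A minimal example of that lightweight, short-running, ephemeral-function shape, written in the style of an AWS Lambda Python handler acting as glue between an HTTP trigger and a downstream queue. The queue call is a hypothetical stand-in; the point is that the function holds no state and owns no server.

```python
# A serverless-style handler: parse the event, do one small job, hand off, return.
import json


def send_to_queue(message: dict) -> None:
    # Stand-in for a real publish call to a managed queue or event bus.
    print("queued:", message)


def handler(event, context):
    order = json.loads(event.get("body") or "{}")
    if "order_id" not in order:
        return {"statusCode": 400, "body": json.dumps({"error": "order_id required"})}
    send_to_queue({"type": "order_received", "order_id": order["order_id"]})
    return {"statusCode": 202, "body": json.dumps({"accepted": order["order_id"]})}


if __name__ == "__main__":
    print(handler({"body": json.dumps({"order_id": "A17"})}, context=None))
```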


New AI standards group wants to make data scraping opt-in

The Dataset Providers Alliance, a trade group formed this summer, wants to make the AI industry more standardized and fair. To that end, it has just released a position paper outlining its stances on major AI-related issues. The alliance is made up of seven AI licensing companies, including music copyright-management firm Rightsify, Japanese stock-photo marketplace Pixta, and generative-AI copyright-licensing startup Calliope Networks. ... The DPA advocates for an opt-in system, meaning that data can be used only after consent is explicitly given by creators and rights holders. This represents a significant departure from the way most major AI companies operate. Some have developed their own opt-out systems, which put the burden on data owners to pull their work on a case-by-case basis. Others offer no opt-outs whatsoever. The DPA, which expects members to adhere to its opt-in rule, sees that route as the far more ethical one. “Artists and creators should be on board,” says Alex Bestall, CEO of Rightsify and the music-data-licensing company Global Copyright Exchange, who spearheaded the effort. Bestall sees opt-in as a pragmatic approach as well as a moral one: “Selling publicly available datasets is one way to get sued and have no credibility.”


AI potential outweighs deepfake risks only with effective governance: UN

“AI must serve humanity equitably and safely,” Guterres says. “Left unchecked, the dangers posed by artificial intelligence could have serious implications for democracy, peace and stability. Yet, AI has the potential to promote and enhance full and active public participation, equality, security and human development. To seize these opportunities, it is critical to ensure effective governance of AI at all levels, including internationally.” ... The flurry of laws also concern worker protections – which in Hollywood means protecting actors and voice actors from being replaced with deepfake AI clones. Per AP, the measure mirrors language in the deal SAG-AFTRA made with movie studios last December. The state is also to consider imposing penalties on those who clone the dead without obtaining consent from the deceased’s estate – a bizarre but very real concern, as late celebrities begin popping up in studio films. ... If you find yourself suffering from deepfake despair, Siddharth Gandhi is here to remind you that there are remedies. Writing in ET Edge, the COO of 1Kosmos for Asia Pacific says strong security is possible by pairing liveness detection with device-based algorithmic systems that can detect injection attacks in real-time.


Red Hat delivers AI-optimized Linux platform

RHEL AI helps enterprises get away from the “one model to rule them all” approach to generative AI, which is not only expensive but can lock enterprises into a single vendor. There are now open-source large language models that rival those from commercial vendors in performance. “And there are smaller models,” Katarki adds, “which are truly aligned to your specific use cases and your data. They offer much better ROI and much better overall costs compared to large language models in general.” And not only the models themselves but the tools needed to train them are also available from the open-source community. “The open-source ecosystem is really fueling generative AI, just like Linux and open source powered the cloud revolution,” Katarki says. In addition to allowing enterprises to run generative AI on their own hardware, RHEL AI also supports a “bring your own subscription” model for public cloud users. At launch, RHEL AI supports AWS and the IBM cloud. “We’ll be following that with Azure and GCP in the fourth quarter,” Katarki says. RHEL AI also has guardrails and agentic AI on its roadmap. “Guardrails and safety are one of the value-adds of InstructLab and RHEL AI,” he says.



Quote for the day:

"Without continual growth and progress, such words as improvement, achievement, and success have no meaning." -- Benjamin Franklin

Daily Tech Digest - September 04, 2024

What is HTTP/3? The next-generation web protocol

HTTPS will still be used as a mechanism for establishing secure connections, but traffic will be encrypted at the HTTP/3 level. Another way to say it is that TLS will be integrated into the network protocol instead of working alongside it. So, encryption will be moved into the transport layer and out of the app layer. This means more security by default—even the headers in HTTP/3 are encrypted—but there is a corresponding cost in CPU load. Overall, the idea is that communication will be faster due to improvements in how encryption is negotiated, and it will be simpler because it will be built-in at a lower level, avoiding the problems that arise from a diversity of implementations. ... In TCP, that continuity isn’t possible because the protocol only understands the IP address and port number. If either of those changes—as when you walk from one network to another while holding a mobile device—an entirely new connection must be established. This reconnection leads to a predictable performance degradation. The QUIC protocol introduces connection IDs or CIDs. For security, these are actually CID sets negotiated by the server and client. 
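
An illustrative (not wire-accurate) sketch of why connection IDs enable that migration: the server keys session state on the CID rather than on the (IP address, port) tuple that TCP is bound to, so a client that changes networks keeps the same session without a new handshake.

```python
# Toy model of QUIC-style connection migration: state is looked up by connection ID.
sessions = {}  # connection ID -> session state


def handle_packet(cid: str, src_addr: tuple[str, int], payload: bytes) -> str:
    session = sessions.setdefault(cid, {"addr": src_addr, "bytes": 0})
    if session["addr"] != src_addr:
        # Client moved from Wi-Fi to cellular: same CID, new address, no re-handshake.
        session["addr"] = src_addr
    session["bytes"] += len(payload)
    return f"cid={cid} addr={session['addr']} total={session['bytes']}"


print(handle_packet("abc123", ("192.0.2.10", 443), b"hello"))
print(handle_packet("abc123", ("198.51.100.7", 443), b"world"))  # migrated, same session
```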


6 things hackers know that they don’t want security pros to know that they know

It’s not a coincidence that many attacks happen at the most challenging of times. Hackers really do increase their attacks on weekends and holidays when security teams are lean. And they’re more likely to strike right before lunchtime and end-of-day, when workers are rushing and consequently less attentive to red flags indicating a phishing attack or fraudulent activity. “Hackers typically deploy their attacks during those times because they’re less likely to be noticed,” says Melissa DeOrio, global threat intelligence lead at S-RM, a global intelligence and cybersecurity consultancy. ... Threat actors actively engage in open-source intelligence (OSINT) gathering, looking for information they can use to devise attacks, Carruthers says. It’s not surprising that hackers look for news about transformative events such as big layoffs, mergers and the like, she says. But CISOs, their teams and other executives may be surprised to learn that hackers also look for news about seemingly innocuous events such as technology implementations, new partnerships, hiring sprees, and executive schedules that could reveal when they’re out of the office.


Take the ‘Shift Left’ Approach a Step Further by ‘Starting Left’

This makes it vital to guarantee code quality and security from the start so that nothing slips through the cracks. Shift left accounts for this. It minimizes risks of bugs and vulnerabilities by introducing code testing and analysis earlier in the SLDC, catching problems before they mount and become trickier to solve or even find. Advancing testing activities earlier puts DevOps teams in a position to deliver superior-quality software to customers with greater frequency. As a practice, “shift left” requires a lot more vigilance in today’s security landscape. But most development teams don’t have the mental (or physical) bandwidth to do it properly — even though it should be an intrinsic part of code development strategy. In fact, the Linux Foundation revealed in a study recently that almost one-third of developers aren’t familiar with secure software development practices. “Shifting left” — performing analysis and code reviews earlier in the development process — is a popular mindset for creating better software. What the mindset should be, though, is to “start left,” not just impose the burden later on in the SDLC for developers. ... This mindset of “start left” focuses not only on an approach that values testing early and often, but also on using the best tools to do so. 


ONCD Unveils BGP Security Road Map Amid Rising Threats

The guidance comes amid an intensified threat landscape for BGP, which serves as the backbone of global internet traffic routing. BGP is a foundational yet vulnerable protocol, developed at a time when many of today's cybersecurity risks did not exist. Coker said the ONCD is committed to covering at least 60% of the federal government's IP space by registration service agreements "by the end of this calendar year." His office recently led an effort to develop a federal RSA template that federal agencies can use to facilitate their adoption of Resource Public Key Infrastructure, which can be used to mitigate BGP vulnerabilities. ... The ONCD report underscores how BGP "does not provide adequate security and resilience features" and lacks critical security capabilities, including the ability to validate the authority of remote networks to originate route announcements and to ensure the authenticity and integrity of routing information. The guidance tasks network operators with developing and periodically updating cybersecurity risk management plans that explicitly address internet routing security and resilience. It also instructs operators to identify all information systems and services internal to the organization that require internet access and assess the criticality of maintaining those routes for each address.
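
A simplified sketch of the route origin validation that RPKI enables, producing the valid / invalid / not-found outcomes in the spirit of RFC 6811. The ROA entries here are invented for illustration; real deployments pull validated ROA payloads from RPKI validators.

```python
# Each ROA authorizes an origin AS to announce a prefix up to a maximum length.
import ipaddress

ROAS = [
    {"prefix": ipaddress.ip_network("192.0.2.0/24"), "max_len": 24, "asn": 64500},
    {"prefix": ipaddress.ip_network("198.51.100.0/22"), "max_len": 24, "asn": 64501},
]


def validate_announcement(prefix: str, origin_asn: int) -> str:
    net = ipaddress.ip_network(prefix)
    covering = [r for r in ROAS if net.subnet_of(r["prefix"])]
    if not covering:
        return "not-found"   # no ROA covers this prefix
    for roa in covering:
        if roa["asn"] == origin_asn and net.prefixlen <= roa["max_len"]:
            return "valid"
    return "invalid"         # covered by a ROA, but origin AS or length doesn't match


print(validate_announcement("192.0.2.0/24", 64500))    # valid
print(validate_announcement("192.0.2.0/24", 64666))    # invalid (wrong origin AS)
print(validate_announcement("203.0.113.0/24", 64500))  # not-found
```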


Efficient DevSecOps Workflows With a Little Help From AI

When it comes to software development, AI offers lots of possibilities to enhance workflows at every stage—from splitting teams into specialized roles such as development, operations, and security to facilitating typical steps like planning, managing, coding, testing, documentation, and review. AI-powered code suggestions and generation capabilities can automate tasks like autocompletion and identification of missing dependencies, making coding more efficient. Additionally, AI can provide code explanations, summarizing algorithms, suggesting performance improvements, and refactoring long code into object-oriented patterns or different languages. ... Instead of manually sifting through job logs, AI can analyze them and provide actionable insights, even suggesting fixes. By refining prompts and engaging in conversations with the AI, developers can quickly diagnose and resolve issues, even receiving tips for optimization. Security is crucial, so sensitive data like passwords and credentials must be filtered before analysis. A well-crafted prompt can instruct the AI to explain the root cause in a way any software engineer can understand, accelerating troubleshooting. This approach can significantly improve developer efficiency.
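
A small sketch of the filtering step mentioned above: scrub obvious secrets from job logs before handing them to an AI assistant for root-cause analysis. The patterns are illustrative, not an exhaustive or product-specific redaction policy.

```python
# Redact likely credentials and personal data from log text before AI analysis.
import re

REDACTIONS = [
    (re.compile(r"(?i)(password|passwd|secret|token|api[_-]?key)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
    (re.compile(r"(?i)authorization:\s*bearer\s+\S+"), "Authorization: Bearer [REDACTED]"),
    (re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"), "[EMAIL]"),
]


def scrub(log_text: str) -> str:
    for pattern, replacement in REDACTIONS:
        log_text = pattern.sub(replacement, log_text)
    return log_text


job_log = "ERROR deploy failed\npassword: hunter2\nAuthorization: Bearer eyJabc\ncontact ops@example.com"
print(scrub(job_log))  # now safer to paste into an AI prompt for troubleshooting
```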


PricewaterhouseCoopers’ new CAIO – workers need to know their role with AI

“AI is becoming a natural part of everything we make and do. We’re moving past the AI exploration cycle, where managing AI is no longer just about tech, it is about helping companies solve big, important and meaningful problems that also drive a lot of economic value. “But the only way we can get there is by bringing AI into an organization’s business strategy, capability systems, products and services, ways of working and through your people. AI is more than just a tool — it can be viewed as a member of the team, embedding into the end-to-end value chain. The more AI becomes naturally embedded and intrinsic to an organization, the more it will help both the workforce and business be more productive and deliver better value. “In addition, we will see new products and services that are fully AI-powered come into the market — and those are going to be key drivers of revenue and growth.” ... You need to consider the bigger picture, understanding how AI is becoming integrated in all aspects of your organization. That means having your RAI leader working closely with your company’s CAIO (or equivalent) to understand changes in your operating model, business processes, products and services.


What Is Active Metadata and Why Does It Matter?

Active metadata’s ability to update automatically whenever the data it describes changes now extends beyond the data profile itself to enhance the management of data access, classification, and quality. Passive metadata’s static nature limits its use to data discovery, but the dynamic nature of active metadata delivers real-time insights into the data’s lineage to help automate data governance: Get a 360-degree view of data - Active metadata’s ability to auto-update ensures that metadata delivers complete and up-to-date descriptions of the data’s lineage, context, and quality. Companies can tell at a glance whether the data is being used effectively, appropriately, and in compliance with applicable regulations. Monitor data quality in real time - Automatic metadata updates improve data quality management by providing up-to-the-minute metrics on data completeness, accuracy, and consistency. This allows organizations to identify and respond to potential data problems before they affect the business. Patch potential governance holes - Active metadata allows data governance rules to be enforced automatically to safeguard access to the data, ensure it’s appropriately classified, and confirm it meets all data retention requirements. 
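
A minimal sketch of the "active" idea, assuming an in-memory table of records (the field names and the completeness rule are hypothetical): metadata such as row count, completeness, freshness, and a crude lineage breadcrumb is recomputed on every write rather than curated by hand.

```python
"""Toy illustration of active vs. passive metadata: the metadata below is
recomputed automatically on every write instead of being maintained manually.
The dataset fields and the quality rule are hypothetical."""
from datetime import datetime, timezone

class ActiveDataset:
    def __init__(self, name: str, required_fields: list[str]):
        self.name = name
        self.required_fields = required_fields
        self.rows: list[dict] = []
        self.metadata: dict = {}
        self._refresh_metadata(source="initialized")

    def write(self, row: dict, source: str) -> None:
        self.rows.append(row)
        self._refresh_metadata(source=source)  # metadata changes with the data

    def _refresh_metadata(self, source: str) -> None:
        complete = sum(
            1 for r in self.rows
            if all(r.get(f) is not None for f in self.required_fields)
        )
        self.metadata = {
            "row_count": len(self.rows),
            "completeness": complete / len(self.rows) if self.rows else None,
            "last_updated": datetime.now(timezone.utc).isoformat(),
            "last_source": source,  # crude lineage breadcrumb
        }

ds = ActiveDataset("customers", required_fields=["id", "email"])
ds.write({"id": 1, "email": "a@example.com"}, source="crm_export")
ds.write({"id": 2, "email": None}, source="web_signup")
print(ds.metadata)  # completeness drops to 0.5 without any manual curation
```

A governance engine watching this metadata stream could flag the quality drop or an unexpected source in real time, which is what the excerpt means by patching potential governance holes.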


How to Get IT and Security Teams to Work Together Effectively

Successful collaboration requires a sense of shared mission, Preuss says. Transparency is crucial. "Leverage technology and automation to effectively share information and challenges across both teams," she advises. Building and practicing trust and communication in an environment that's outside the norm is also essential. One way to do so is by conducting joint business resilience drills. "Whether a cyber war game or an environmental crisis [exercise], resilience drills are one way to test the collaboration between teams before an event occurs." ... When it comes to cross-team collaboration, Scott says it's important for members to understand their communication style as well as the communication styles of the people they work with. "At Immuta, we do this through a DiSC assessment, which each employee is invited to complete upon joining the company." To build an overall sense of cooperation and teamwork, Jeff Orr, director of research, digital technology at technology research and advisory firm ISG, suggests launching an exercise simulation in which both teams are required to collaborate in order to succeed. 


Protecting national interests: Balancing cybersecurity and operational realities

A significant challenge we face today is safeguarding the information space against misinformation, disinformation, manipulation and deceptive content. Whether this is at the behest of nation-states or their supporters, it can be immensely destabilising and disruptive. We must find a way to tackle this challenge, and the response should not focus only on the responsibilities held by social media platforms, but also on how we can detect targeted misinformation, counter those narratives and block the sources. Technology companies have a key role in taking down content that is obviously malicious, but we need processes that respond in hours rather than days or weeks. More generally, the infrastructure used to launch attacks can be spun up more quickly than ever, and attacks manifest at speed. This requires government to work more closely with major technology and telecommunication providers so these threats can be blocked and countered, which in turn demands information-sharing mechanisms and legal frameworks that enable it. Investigating and countering modern transnational cybercrime demands very different approaches, and AI will undoubtedly play a big part in this, though sadly in both attack and defence.


How leading CIOs cultivate business-centric IT

With digital strategy and technology as the brains behind most business functions and operating models, IT organizations are determined to inject more business-centricity into their employee DNA. IT leaders have been burnishing their business acumen and embracing a non-technical remit for some time. Now, there’s a growing desire to infuse that mentality throughout the greater IT organization, stretching beyond basic business-IT alignment to creating a collaborative force hyper-fixated on channeling innovation to advance enterprise business goals. “IT is no longer the group in the rear with the gear,” says Sabina Ewing, senior vice president of business and technology services and CIO at Abbott Laboratories. ... While those with robust experience and expertise in highly technical areas such as cloud architecture or cybersecurity are still highly coveted, IT organizations like Duke Health, ServiceNow, and others are also seeking a very different type of persona. Zoetis, a leading animal health care company, casts a wider net when seeking tech and digital talent, focusing on those who are collaborative, passionate about making a difference, and adaptable to change. Candidates should also have a strong understanding of technology application, says CIO Keith Sarbaugh.



Quote for the day:

"When someone tells me no, it doesn't mean I can't do it, it simply means I can't do it with them." -- Karen E. Quinones Miller

Daily Tech Digest - September 03, 2024

Cloud application portability remains unrealistic

Enterprises can deploy an application across multiple cloud providers to distribute risk and reduce dependency on a single vendor. This strategy also offers leverage when negotiating terms or migrating services. It may prevent vendor lock-in and provide flexibility to optimize costs by leveraging the most cost-effective services available from different providers. That said, you’d be wrong if you think multicloud is the answer to a lack of portability. You’ll have to attach your application to native features to optimize them for the specific cloud provider. As I’ve said, portability has been derailed, and you don’t have good options. A “multiple providers” approach minimizes the negative impact but does not solve the portability problem. Build applications with portability in mind. This approach involves containerization technologies, such as Docker, and orchestration platforms, such as Kubernetes. Abstracting applications from the underlying infrastructure ensures they are compatible with multiple environments. Additionally, avoiding proprietary services and opting for open source tools can enhance portability and reduce costs associated with reconfigurations or migrations. 
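
One way to picture that abstraction discipline is to keep application code pointed at a small interface rather than a provider's native SDK. The ObjectStore protocol and the local backend below are hypothetical; S3, GCS, or Azure backends would implement the same two methods with their own client libraries, so swapping providers touches one class instead of the whole application.

```python
"""Sketch of keeping application code portable by depending on a small storage
interface instead of a provider's native SDK. The interface and the local
backend are illustrative; cloud backends would implement the same methods."""
from pathlib import Path
from typing import Protocol

class ObjectStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class LocalObjectStore:
    """Filesystem-backed implementation; an S3 or GCS backend would swap in here."""
    def __init__(self, root: str):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, key: str, data: bytes) -> None:
        path = self.root / key
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()

def archive_report(store: ObjectStore, report: str) -> None:
    # Application logic sees only the interface, never a provider SDK.
    store.put("reports/latest.txt", report.encode())

archive_report(LocalObjectStore("/tmp/portability-demo"), "quarterly numbers")
```

The trade-off the excerpt describes still applies: the moment application code reaches past this interface for a provider-native feature, that portability is spent.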


Will Data Centers in Orbit Launch a New Phase of Sustainability?

Space offers an appealing solution for many of the problems that plague terrestrial data centers. Space-based data centers could use solar arrays to draw power from the sun, alleviating the burden on electrical grids here on Earth. They would not require water for cooling. They would not take up land or disturb people or wildlife. Additionally, natural disasters that can damage or wipe out data centers on Earth -- earthquakes, wildfires, floods, tsunamis -- are a non-issue in space. ... While the upsides of data centers in space are easy to imagine, what will it take to make them a reality? The Advanced Space Cloud for European Net zero emission and Data sovereignty (ASCEND) study set out to answer questions about space data centers' technical feasibility and their environmental benefits. The study is funded by the European Commission as part of Horizon Europe, the EU's scientific research program. Thales Alenia Space led the study with a consortium of 11 partners, including research organizations and industrial companies from five European countries. Thales Alenia Space announced the results of the 16-month study at the end of June.


Workload Protection in the Cloud: Why It Matters More Than Ever

CWP is a necessity that must not be ignored. As the adoption of cloud technology grows, the scale and complexity of threats also escalate. Here are the reasons why CWP is critical: Increased threat environment: Cyber threats are becoming more complex and frequent. CWP tools are crafted to detect and counter these changing threats in real time, delivering enhanced protection for cloud workloads exposed across various networks and environments. Protection against data breaches and compliance: Data breaches can lead to severe financial and reputational harm. CWP tools assist organizations in complying with strict regulations like GDPR, HIPAA, and PCI-DSS by implementing strong security protocols and compliance checks. Maintenance of operational integrity: It is essential for businesses to maintain the uninterrupted operation of their cloud workloads without being affected by security incidents. CWP tools offer extensive threat detection and automated responses, minimizing disruptions and upholding operational integrity. Cost implications: Security breaches can incur substantial costs. Investing in CWP tools helps avert these risks through the early identification of vulnerabilities and threats, ultimately protecting organizations from potential financial losses due to breaches and service interruptions.


How Human-Informed AI Leads to More Accurate Digital Twins

The value of a DT is directly proportional to its accuracy, which in turn depends on the data available. But data availability remains a challenge — ironically, often in the business use cases that could benefit the most from DTs — and it’s a big reason why DTs are still in their infancy. DTs could help guide the expansion of current products to new market domains, accelerating R&D and innovation by enabling virtual experimentation. But research activities often involve exploring new territory where data is scarce or protected by patents owned by other organizations. For example, while DTs could inform an organization’s understanding of how a new topology may affect heavy construction equipment or how a smart building may behave under unusual weather conditions, there is limited data available about these new domains. ... DTs can add immense value by reducing costs and the time it takes to develop new processes, but data to develop these models is limited given that the work explores new territory. Further, data-sharing across the supply chain is sharply limited due to extreme sensitivity about intellectual property.


Leveraging AI for enhanced crime scene investigation

Importantly, as more crimes are committed and solved, the algorithms and the software built on them become more sophisticated. Interestingly, these algorithms use information obtained from various sources without any human intervention, reducing the chances of bias or error. With the increasing use of mobile phones and the internet, information is flooding in as photos, videos, audio recordings, emails, letters, newspaper reports, speeches, social media posts, locations, and more. Various AI and ML-based algorithms are used to quickly analyze this data, perform mathematical transformations, draw inferences, and reach conclusions. This makes it possible to predict the likelihood of crimes in a very short time, which is almost impossible otherwise. A smart city-related company in Israel called ‘Cortica’ has developed software that analyzes the information obtained through CCTV. This software uses AI algorithms to recognize faces in a crowd, identify crowd behavior and movement, and predict the likelihood and nature of a crime. Notably, these intelligent algorithms make it possible to analyze several terabytes of video footage in minimal time and make quite precise inferences.


There are many reasons why companies struggle to exploit generative AI

Some qualitative remarks by executives interviewed revealed more detail on where that lack of preparedness lies. For example, a former vice president of data and intelligence for a media company told Rowan and team that the "biggest scaling challenge" for the company "was really the amount of data that we had access to and the lack of proper data management maturity." The executive continued: "There was no formal data catalog. There was no formal metadata and labeling of data points across the enterprise. We could go only as fast as we could label the data." ... Uncertainty about novel regulations is also causing companies to pause and think, Rowan and team stated in the report: "Organizations were exceedingly uncertain about the regulatory environment that may exist in the future (depending on the countries they operate in)." In response to both concerns, companies are pursuing a variety of strategies, Rowan and team found. These strategies include: "shut off access to specific Generative AI tools for staff"; "put in place guidelines to prevent staff from entering organizational data into public LLMs"; and "build walled gardens in private clouds with safeguards to prevent data leakage into the public cloud."


The role of behavioral biometrics in a world of growing cyberthreats

Behavioral biometrics might be an evolving form of biometric technology, but its foundations are already quite well established. In retail and ecommerce, for example, the lines blur slightly between the terms ‘behavioral biometrics’ and ‘risk-based authentication’. Behavior in this sense isn’t just how people interact with their device, but also the location they’re ordering from and to, or the time zone and time of day at which they’re looking to make a purchase. The level of risk rises and falls relative to what is deemed ‘typical behavior’, both in the broader sense and for that individual transaction. ‘Risk’ refers to the degree of confidence in authentication accuracy and will be key to the rise of behavioral biometrics in other industries too, including healthcare and banking, where it is already being deployed to varying extents. It comes down to the use case and whether the risk posed is suitable for passive authentication. In healthcare, for example, passive authentication wouldn’t be sufficient to access patient databases, but once a user is logged in, it could help confirm that the same user is still active or online. ... Aside from the security element, behavioral biometrics can also enable improved personalization and marketing strategies.
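
A toy sketch of that risk-based scoring, with hypothetical signals, weights, and thresholds: each deviation from a user's typical behavior raises the score, and the score decides whether passive authentication is enough or a step-up challenge is needed.

```python
"""Toy risk-based authentication score. The signals, weights, and thresholds
are hypothetical; real behavioral-biometrics engines use far richer models."""
from dataclasses import dataclass

@dataclass
class LoginContext:
    country: str
    hour_of_day: int       # 0-23, in the user's usual time zone
    device_id: str

@dataclass
class UserProfile:
    usual_countries: set[str]
    usual_hours: range     # e.g. range(7, 23)
    known_devices: set[str]

def risk_score(ctx: LoginContext, profile: UserProfile) -> int:
    score = 0
    if ctx.country not in profile.usual_countries:
        score += 40        # unfamiliar location
    if ctx.hour_of_day not in profile.usual_hours:
        score += 20        # unusual time of day
    if ctx.device_id not in profile.known_devices:
        score += 30        # new device
    return score

def decide(score: int) -> str:
    if score < 30:
        return "allow (passive authentication sufficient)"
    if score < 60:
        return "step-up: one-time passcode"
    return "block and review"

profile = UserProfile({"US"}, range(7, 23), {"laptop-1"})
print(decide(risk_score(LoginContext("US", 10, "laptop-1"), profile)))  # allow
print(decide(risk_score(LoginContext("BR", 3, "phone-9"), profile)))    # block and review
```

In the healthcare example above, the same kind of score could decide which actions stay passive (confirming the same user is still in the session) and which require an explicit challenge (opening a patient database).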


Data center sustainability is no longer optional

A recent empirical investigation conducted by the Borderstep Institute, in collaboration with the EU, revealed that digital technologies already account for approximately five to nine percent of global electricity consumption and carbon emissions, a number expected to increase as the demand for compute power, driven by the rise of generative artificial intelligence (gen AI) and foundation models, continues to grow. ... Databases are a significant contributor to data center workloads. They are critical for storing, managing, and retrieving large volumes of data, are computationally intensive, and contribute significantly to the overall energy consumption of data centers across thousands of database instances. Therefore, artificial intelligence database tuning will be central to any sustainability strategy to increase efficiency. ... Artificial intelligence database tuning offers a revolutionary approach to database management, enabling businesses to achieve high database performance while minimizing their environmental impact. By observing real-time data, AI can identify more effective PostgreSQL configurations that minimize energy usage.
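
A heavily simplified sketch of that feedback loop is below. The observed metrics, the adjustment heuristic, and the bounds are illustrative only; an AI tuner would learn these relationships from real-time telemetry rather than hard-code them, and would apply changes through PostgreSQL's configuration interfaces.

```python
"""Grossly simplified illustration of metrics-driven PostgreSQL tuning.

The heuristic below is a stand-in for a learned model: raise work_mem when
sorts spill to temporary files (wasting energy on disk I/O), but never beyond
a fraction of the memory actually available."""

def propose_work_mem(current_kb: int, temp_file_bytes: int, free_mem_kb: int) -> int:
    if temp_file_bytes == 0:
        return current_kb                      # no spills observed; leave it alone
    proposed = min(current_kb * 2, free_mem_kb // 8)
    return max(proposed, current_kb)           # never tune downward here

# Hypothetical observations, e.g. from pg_stat_database and the OS:
observed = {"work_mem_kb": 4096, "temp_file_bytes": 750_000_000, "free_mem_kb": 8_000_000}
new_setting = propose_work_mem(
    observed["work_mem_kb"], observed["temp_file_bytes"], observed["free_mem_kb"]
)
print(f"suggested setting: work_mem = '{new_setting}kB'")  # e.g. 8192kB
```

Fewer disk spills mean fewer wasted I/O cycles per query, which is where the energy saving would come from.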


Building an Accessible Future in the Private Sector

Just like the public sector must make its services accessible to all groups, so must the private sector. Luckily, several regulations make accessibility a legal requirement for the private sector. The most notable is the Americans with Disabilities Act (ADA), a federal law passed in 1990 to prohibit discrimination against people with disabilities in many areas of public life. Title III of the ADA considers websites "public accommodations" and mandates that people with disabilities have equal access. However, true digital accessibility in the modern age needs to go further to ensure all digital products — websites, kiosks, mobile, and web applications — are equally accessible to people with disabilities. ... Companies leading the charge on accessibility are viewed as socially responsible and inclusive, attributes that matter to this generation of consumers. Organizations that value cultivating relationships with diverse customer groups often experience stronger customer loyalty. Brands like Apple and Microsoft are shining examples and have long been praised for providing inclusive technology and experiences. 


How to ensure cybersecurity strategies align with the company’s risk tolerance

One way for CISOs to align cybersecurity strategies with organizational risk tolerance is strategic involvement across the organization. “By forming risk committees and engaging in business discussions, CISOs can better understand and address the risks associated with new technologies and initiatives, and support the organization’s overall strategy,” Carmichael says. An information security committee is vital to this mission, according to Carl Grifka, managing director of SingerLewak LLP, an advisory firm that specializes in risk and cybersecurity. “There needs to be a regular assessment of not just the cybersecurity environment, but also the risk tolerance and risk appetite, which is going to drive the controls that we’re going to put in place,” Grifka tells CSO. The committee operates as a cross-functional team that brings together different parts of the business, including the executive, IT, and security functions, and maybe even a board representative, on a regular basis. Organizations low on the maturity scale probably need to meet every couple of weeks, especially if they’re in a remediation phase and working to reduce gaps in their security posture.



Quote for the day:

"Those who have succeeded at anything and don’t mention luck are kidding themselves." -- Larry King