Daily Tech Digest - September 11, 2024

Unlocking the Quantum Internet: Germany’s Latest Experiment Sets Global Benchmarks

“Comparative analysis with existing QKD systems involving SPS reveals that the SKR achieved in this work goes beyond all current SPS-based implementations. Even without further optimization of the source and setup performance, it approaches the levels attained by established decoy state QKD protocols based on weak coherent pulses.” The first author of the work, Dr. Jingzhong Yang remarked. The researchers speculate that QDs also offer great prospects for the realization of other quantum internet applications, such as quantum repeaters, and distributed quantum sensing, as they allow for inherent storage of quantum information and can emit photonic cluster states. The outcome of this work underscores the viability of seamlessly integrating semiconductor single-photon sources into realistic, large-scale, and high-capacity quantum communication networks. The need for secure communication is as old as humanity itself. Quantum communication uses the quantum characteristics of light to ensure that messages cannot be intercepted. “Quantum dot devices emit single photons, which we control and send to Braunschweig for measurement. This process is fundamental to quantum key distribution,” Ding said.


How AI Impacts Sustainability Opportunities and Risks

While AI can be applied to sustainability challenges, there are also questions around the sustainability of AI itself given technology’s impact on the environment. “We know that many companies are already dealing with the ramifications of increased energy usage and water usage as they're building out their AI models,” says Shim. ... As the AI market goes through its growing pains, chips are likely to become more efficient and use cases for the technology will become more targeted. But predicting the timeline for that potential future or simply waiting for it to happen is not the answer for enterprises that want to manage opportunities and risks around AI and sustainability now. Rather than getting caught up in “paralysis by analysis,” enterprise leaders can take action today that will help to actually build a more sustainable future for AI. With AI having both positive and negative impacts on the environment, enterprise leaders who wield it with targeted purpose are more likely to guide their organizations to sustainable outcomes. Throwing AI at every possible use case and seeing what sticks is more likely to tip the scales toward a net negative environmental impact. 


Agentic AI: A deep dive into the future of automation

Agentic AI combines classical automation with the power of modern large language models (LLMs), using the latter to simulate human decision-making, analysis and creative content. The idea of automated systems that can act is not new, and even a classical thermostat that can turn the heat and AC on and off when it gets too cold or hot is a simple kind of “smart” automation. In the modern era, IT automation has been revolutionized by self-monitoring, self-healing and auto-scaling technologies like Docker, Kubernetes and Terraform which encapsulate the principles of cybernetic self-regulation, a kind of agentic intelligence. These systems vastly simplify the work of IT operations, allowing an operator to declare (in code) the desired end-state of a system and then automatically align reality with desire—rather than the operator having to perform a long sequence of commands to make changes and check results. However powerful, this kind of classical automation still requires expert engineers to configure and operate the tools using code. Engineers must foresee possible situations and write scripts to capture logic and API calls that would be required. 


How to Make Technical Debt Your Friend

When a team identifies that they are incurring technical debt, they are basing that assessment on their theoretical ideal for the architecture of the system, but that ideal is just their belief based on assumptions that the system will be successful. The MVP may be successful, but in most cases its success is only partial - that is the whole point of releasing MVPs: to learn things that can be understood in no other way. As a result, assumptions about the MVA that the team needs to build also tend to be at least partially wrong. The team may think that they need to scale to a large number of users or support large volumes of data, but if the MVP is not overwhelmingly appealing to customers, these needs may be a long way off, if they are needed at all. For example, the team may decide to use synchronous communications between components to rapidly deliver an MVP, knowing that an asynchronous model would offer better scalability. However, the switch between synchronous and asynchronous models may never be necessary since scalability may not turn out to be an issue.


What CIOs should consider before pursuing CEO ambitions

The trend is encouraging, but it’s important to temper expectations. While CIOs have stepped up and delivered digital strategies for business transformation, using those successes as a platform to move into a CEO position could throw a curveball. Jon Grainger, CTO at legal firm DWF, says one key challenge is industrial constraints. “You’ve got to remember that, in a sector like professional services, there are things you’re going to be famous for,” he says. “DWF is famous for providing amazing legal services. And to do that, the bottom line is you’ve got to be a lawyer — and that’s not been my path.” He says CIOs can become CEOs, but only in the right environment. “If the question was rephrased to, ‘Jon, could you see yourself as a CEO?,’ then I would say, ‘Yes, absolutely.’ But I would say I’m unlikely to become the CEO of a legal services company because, ultimately, you’ve got to have the right skill set.” Another challenge is the scale of the transition. Compared to the longevity of other C-suite positions, technology leadership is an executive fledgling. Many CIOs — and their digital leadership peers, such as chief data or digital officers — are focused squarely on asserting their role in the business.


Immediate threats or long-term security? Deciding where to focus is the modern CISO’s dilemma

CISOs need to balance their budgets between immediate threat responses and long-term investments in cybersecurity infrastructure, says Eric O’Neill, national security strategist at NeXasure and a former FBI operative who helped capture former FBI special agent Robert Hanssen, the most notorious spy in US history. While immediate threats require attention, CISOs should allocate part of their budgets to long-term planning measures, such as implementing multi-factor authentication and phased infrastructure upgrades, he says. “This balance often involves hiring incident response partners on retainer to handle breaches, thereby allowing internal teams to focus on prevention and detection,” O’Neill says. “By planning phased rollouts for larger projects, CISOs can spread costs over time while still addressing immediate vulnerabilities.” Clare Mohr, US cyber intelligence lead at Deloitte, says a common approach is for CISOs to allocate 60 to 70% of their budgets to immediate threat response and the remainder to long-term initiatives –although this varies from company to company. “This distribution should be flexible and reviewed annually based on evolving threats,” she says. 


Would you let an AI robot handle 90% of your meetings?

“Let’s assume, fast-forward five or six years, that AI is ready. AI probably can help for maybe 90 per cent of the work,” he said. “You do not need to spend so much time [in meetings]. You do not have to have five or six Zoom calls every day. You can leverage the AI to do that.” Even more interestingly, Yuan alluded to your digital clone potentially being programmed to be better equipped to deal with areas you don’t feel confident in, for example, negotiating a deal during a sales call. “Sometimes I know I’m not good at negotiations. Sometimes I don’t join a sales call with customers,” he explained. “I know my weakness before sending a digital version of myself. I know that weakness. I can modify the parameter a little bit.” ... According to Microsoft’s 2024 Work Trend Index, 75 per cent of knowledge workers use AI at work every day. This is despite 46 per cent of those users not using it less than six months ago. ... However, leaders are lagging behind when it comes to incorporating AI productivity tools – 59 per cent worry about quantifying the productivity gains of AI and as a result, 78 per cent of AI users are bringing their own AI tools to work and 52 per cent who use AI at work are reluctant to admit to it for fear it makes them look replaceable.


Understanding the Importance of Data Resilience

Understanding an organization’s current level of data resilience is crucial for identifying areas that need improvement. Key indicators of data resilience include the Recovery Point Objective (RPO), which refers to the maximum acceptable amount of data loss measured in time. A lower RPO signifies a higher level of data resilience, as it minimizes the amount of data at risk during an incident. The Recovery Time Objective (RTO) is the target time for recovering IT and business activities after a disruption. A shorter RTO indicates a more resilient data strategy, as it enables quicker restoration of operations. Data integrity involves maintaining the accuracy and consistency of data over its lifecycle, implementing measures to prevent data corruption, unauthorized access, and accidental deletions. System redundancy, which includes having multiple data centers, failover systems, and cloud-based backups, ensures continuous data availability by providing redundant systems and infrastructure. Building sustainable data resilience requires a long-term commitment to continuous improvement and adaptation. 


Examining Capabilities-Driven AI

Organizations often respond to trends in technology by developing centralized organizations to adopt the underlying technologies associated with a trend. The industry has decades of experience demonstrating that centralized approaches to adopting technology result in large, centralized cost pools that generate little business value. Since the past is often a good predictor of the future, we expect that many companies will attempt to adopt AI by creating centralized organizations or “centers of excellence,” only to burn millions of dollars without generating significant business value. AI-enablement is much easier to accomplish within a capability than across an entire organization. Organizations can evaluate areas of weakness within a business capability, identify ways to either improve the customer experience and/or reduce the cost to serve, and target improvement levels. Once the improvement is quantified into an economic value, this value can be used to bound the build and operate cost of AI-enhanced capability. Benefit and cost parameters are important because knowledge engineering is often the largest cost associated with an AI-enabled business process. 


SOAR Is Dead, Long Live SOAR

While the core use case for SOAR remains strong, the combination of artificial intelligence, automation, and the current plethora of cybersecurity products will result in a platform that could take market share from SOAR systems, such as an AI-enabled next-generation SIEM, says Eric Parizo, managing principal analyst at Omdia. "SOC decision-makers are [not] going out looking to purchase orchestration and automation as much as they're looking to solve the problem of fostering a faster, more efficient TDIR [threat detection, investigation, and response] life cycle with better, more consistent outcomes," he says. "The orchestration and automation capabilities within standalone SOAR solutions are intended to facilitate those business objectives." AI and machine learning will continue to increasingly augment automation, says Sumo Logic's Clawson. While creating AI security agents that process data and automatically respond to threats is still in its infancy, the industry is clearly moving in that direction, especially as more infrastructure uses an "as-code" approach, such as infrastructure-as-code, he says. The result could be an approach that reduces the need for SOAR.



Quote for the day:

"Kind words do not cost much. Yet they accomplish much." -- Blaise Pascal

Daily Tech Digest - September 10, 2024

Will genAI kill the help desk and other IT jobs?

AI is transforming cybersecurity by automating threat detection, anomaly detection, and incident response. “AI-powered tools can quickly identify unusual behavior, analyze security pattern, scan for vulnerabilities, and even predict cyberattacks, making manual monitoring less necessary,” Foote said. “Security professionals will focus more on developing AI models that can defend against complex threats, especially as cybercriminals begin using AI to attack systems. There will be a demand for experts in AI ethics in cybersecurity, ensuring that AI systems used in security aren’t biased or misused.” IT support and systems administration positions — especially tier-one and tier-two help desk jobs — are expected to be hit particularly hard with job losses. Those jobs entail basic IT problem resolution and service desk delivery, as well as more in-depth technical support, such as software updates, which can be automated through AI today. The help desk jobs that remain would involve more hands-on skills that cannot be resolved by a phone call or electronic message.  ... Data scientists and analysts, on the other hand, will be in greater demand with AI, but their tasks will shift towards more strategic areas like interpreting AI-generated insights, ensuring ethical use of AI


Just-in-Time Access: Key Benefits for Cloud Platforms

Identity and access management (IAM) is a critical component of cloud security, and organizations are finding it challenging to implement it effectively. As businesses increasingly rely on multiple cloud environments, they face the daunting task of managing user identities across all their cloud systems. This requires an IAM solution that can support multiple cloud environments and provide a single source of truth for identity information. One of the most pressing challenges is the management of identities for non-human entities such as applications, services and APIs. IAM solutions must be capable of managing these identities, providing visibility, controlling access and enforcing security policies for non-human entities. ... Just-in-time (JIT) access is a fundamental security practice that addresses many of the challenges associated with traditional access management approaches. It involves granting access privileges to users for limited periods on an as-needed basis. This approach helps minimize the risk of standing privileges, which can be exploited by malicious actors. The concept of JIT access aligns with the principle of least privilege, which is essential for maintaining a robust security posture. 


Maximize Cloud Efficiency: What Is Object Storage?

Although object storage has existed in one form or another for quite some time, its popularity has surged with the growth of cloud computing. Cloud providers have made object storage more accessible and widespread. Cloud storage platforms generally favor object storage because it allows limitless capacity and scalability. Furthermore, object storage usually gets accessed via a RESTful API instead of conventional storage protocols like Server Message Blocks (SMB). This RESTful API access makes object storage easy to integrate with web-based applications. ... Object storage is typically best suited for situations where you need to store large amounts of data, especially when you need to store that data in the cloud. In cloud environments, block storage often stores virtual machines. File storage is commonly employed as a part of a managed solution, replacing legacy file servers. Of course, these are just examples of standard use cases. There are numerous other uses for each type of storage. ... Object storage is well-suited for large datasets, typically offering a significantly lower cost per gigabyte (GB). Having said that, many cloud providers sell various object storage tiers, each with its own price and performance characteristics. 


How human-led threat hunting complements automation in detecting cyber threats

IoBs are patterns that suggest malicious intent, even when traditional IoCs aren’t present. These might include unusual access patterns or subtle deviations from normal procedures that automated systems might miss due to the nature of rule-based detection. Human threat hunters excel at recognizing these anomalies through intuition, experience, and context. The combination of automation and human-led threat hunting ensures that all bases are covered. Automation handles the heavy lifting of data processing and detection of known threats, while human intelligence focuses on the subtle, complex, and context-dependent signals that often precede major security incidents. Together, they create a layered defense strategy that is comprehensive and adaptable. ... Skilled threat hunters are essential to a successful cybersecurity team. Their experience and deep understanding of adversarial tactics help to identify and respond to threats that would otherwise go unnoticed. Their intuition and ability to adapt quickly to new information also make them invaluable, especially when dealing with advanced persistent threats. However, the demand for skilled threat hunters far exceeds the supply. 


A critical juncture for public cloud providers

Enterprises are no longer limited to a single provider and can strategically distribute their operations to optimize costs and performance. This multicloud mastery reduces dependency on any specific vendor and emphasizes cloud providers’ need to offer competitive pricing alongside robust service offerings. There is something very wrong with how cloud providers are addressing their primary market. ... As enterprises explore their options, the appeal of on-premises solutions and smaller cloud providers becomes increasingly apparent. These alternatives, which I’ve been calling microclouds, often present customized services and transparent pricing models that align more closely with economic objectives. Indeed, with the surge of interest in AI, enterprises are turning to these smaller providers for GPUs and storage capabilities tailored to the AI systems they want to develop. They are often much less pricy, and many consider them more accessible than the public cloud behemoths roaming the land these days. Of course, Big Cloud quickly points out that it has thousands of services on its platform and is a one-stop shop for most IT needs, including AI. This is undoubtedly the case, and many entrepreneurs leverage public cloud providers for just those reasons. 


Two Letters, Four Principles: Mercedes Way of Responsible AI

In the past, intelligent systems were repeatedly the target of criticism. Such examples included chatbots using offensive language and discriminating facial recognition algorithms. These cases show that the use of AI requires some clear guidelines. "We adhere to stringent data principles, maintain a clear data vision and have a governance board that integrates our IT, engineering and sustainability efforts," said Renata Jungo Brüngger, member of the board of management for integrity, governance and sustainability at Mercedes-Benz Group, said during the company’s recent India sustainability dialogue 2024. AI is being applied to optimize supply chains, predictive vehicle maintenance and personalize customer interactions. Each of these use cases is developed with a strong focus on ethical considerations, ensuring that AI systems operate within a framework of privacy, fairness and transparency. ... "Data governance is the backbone of our AI strategy," said Umang Dharmik, senior vice president at Mercedes-Benz R&D (India). "There are stringent data governance frameworks to ensure responsible data management throughout its life cycle. This not only ensures compliance with global regulations but also fosters trust with our customers and stakeholders."


The Software Development Trends Challenging Security Teams

With the intense pace of development, chasing down each and every vulnerability becomes unfeasible – it is therefore not surprising to see prioritizing remediation top the list of challenges. Security teams can't afford to spend time, money, and effort fixing something that doesn't actually represent real risk to the organization. What's missing is contextual prioritization of the overall development environment in order to select which vulnerabilities to fix first based on the impact to the business. Security teams should aim to shift the focus to overall product security rather than creating silos for cloud security, application security, and other components of the software supply chain. ... Infrastructure as code use is exploding as developers look for ways to move faster. With IaC, developers can provision their own infrastructure without waiting for IT or operations. However, with increased use comes increased chance of misconfigurations. In fact, 67% of survey respondents noted that they are experiencing an increase in IaC template misconfigurations. These misconfigurations are especially dangerous because one flaw can proliferate easily and widely.


One of the best ways to get value for AI coding tools: generating tests

In our conversations with programmers, a theme that emerged is that many coders see testing as work they HAVE to do, not work they WANT to do. Testing is a best practice that results in a better final outcome, but it isn’t much fun. It’s like taking the time to review your answers after finishing a math test early: crucial for catching mistakes, but not really how you want to spend your free time. For more than a decade, folks have been debating the value of tests on our sites. ... The dislike some developers have for writing tests is a feature, not a bug, for startups working on AI-powered testing tools. CodiumAI is a startup which has made testing the centerpiece of its AI-powered coding tools. “Our vision and our focus is around helping verify code intent,” says Itamar Friedman. He acknowledges that many devs see testing as a chore. “I think many developers do not tend to add tests too much during coding because they hate it, or they actually do it because they think it's important, but they still hate it or find it as a tedious task.” The company offers an IDE extension, Codiumate, that acts as a pair programmer while you work: “We try to automatically raise edge cases or happy paths and challenge you with a failing test and explain why it might actually fail.”


Quantum Safe Encryption is Next Frontier in Protecting Sensitive Data

In the digital world that we live in today, cryptographic encryption and authentication are the de rigour techniques employed to secure data, communications, access to systems as well as digital interactions. Public-key cryptography is a widely prevalent technique used to secure digital infrastructure. Codes and keys used for encryption and authentication in these schemes are specific mathematical problems such as prime factorization that classical computers cannot solve in a reasonable time. ... As standards for the quantum era have been introduced, PQC-based solutions are being introduced in the market. Governments and organizations across the spectrum must move quickly to enhance their cyber resilience to tackle the challenges of the quantum era. The imperative is not only to prepare for an era of readily available powerful quantum computers that could attack incumbent systems but also devise mechanisms to deal with the imminent possibility of decryption of data secured by classical encryption techniques. This could be the existing encrypted data or those that were stolen prior to the availability of quantum-safe encryption standards and hoarded in anticipation of the availability of quantum computer assisted tools to crack them.


Want to get ahead? Four activities that can enable a more proactive security regime

As Goerlich notes, CISOs who want a more proactive program need to be looking into the future. To ensure he has time to do that, Goerlich schedules regular off-site meetings every quarter where he and his team ask what is changing. “This establishes a process and a cadence to get [us] out of the day-to-day activities so we can see the bigger picture,” he explains. “We start fresh and look at what’s coming in the next quarter. We ask what we need to be prepared for. We look back and ask what’s working and what’s not. Then we set goals so we can move forward.” Goerlich says he frequently invites outside security pros, such as vendor executives and other thought leaders, to these meetings to hear their insights into evolving threats as well as emerging security tools and techniques to counteract them. He also sometimes invites his executive colleagues from within his own organization, so that they can share details on their plans and strategies — a move that helps align security with the business needs as the organization moves forward. He has seen this effort pay off. He points to actions resulting from one particular off-site where the team identified challenges around its privilege access management (PAM) process and, more specifically, the number of manual steps it required.



Quote for the day:

"It takes courage and maturity to know the difference between a hoping and a wishing." -- Rashida Jourdain

Daily Tech Digest - September 09, 2024

Does your organization need a data fabric?

So, while real-time data integration and performing data transformations are key capabilities of data fabrics, their defining capability is in providing centralized, standardized, and governed access to an enterprise’s data sources. “When evaluating data fabrics, it’s essential to understand that they interconnect with various enterprise data sources, ensuring data is readily and rapidly available while maintaining strict data controls,” says Simon Margolis, associate CTO of AI/ML at SADA. “Unlike other data aggregation solutions, a functional data fabric serves as a “one-stop shop” for data distribution across services, simplifying client access, governance, and expert control processes.” Data fabrics thus combine features of other data governance and dataops platforms. They typically offer data cataloging functions so end-users can find and discover the organization’s data sets. Many will help data governance leaders centralize access control while providing data engineers with tools to improve data quality and create master data repositories. Other differentiating capabilities include data security, data privacy functions, and data modeling features.


The Crucial Role of Manual Data Annotation and Labeling in Building Accurate AI Systems

Automatic annotation systems frequently suffer from severe limitations, most notably accuracy. Despite its rapid evolution, AI can still misunderstand context, fail to spot complex patterns, and perpetuate inherent biases in data. For example, an automated annotation system may mislabel an image of a person holding an object because it is unable to handle complicated scenarios or objects that overlap. Similarly, in textual data, automated systems may misread cultural references, idiomatic expressions, or sentiments. ... Manual annotation, on the other hand, uses human expertise to label data, ensuring accuracy, context understanding, and bias reduction. Humans are naturally skilled at understanding ambiguity, context, and making sense of complex patterns that machines may not be able to grasp. This knowledge is critical in applications requiring absolute precision, such as healthcare diagnostics, legal document interpretation, and ethical AI deployment. Manual annotation adds a level of justice that automated procedures typically lack. Human annotators can recognize and mitigate biases in datasets, whether they be racial, gender-based, or cultural. 


AI orchestration: Crafting harmony or creating dependency?

In a collaborative relationship, both parties have an equal and complementary role. AI excels at processing enormous amounts of data, pattern recognition and certain types of analysis, while people excel at creativity, emotional intelligence and complex decision-making. In this relationship, the human keeps agency through critically evaluating AI outputs and making final decisions. However, this relationship can easily veer into dependency where we become unable or unwilling to perform tasks without AI help, even for tasks we could previously do independently. As AI outputs have become amazingly human-like and convincing, it is easy to accept them without critical evaluation or understanding, even when knowing the content may be a hallucination — an AI-generated output that appears convincing but is false or misleading. ... As AI continues to advance and become more indistinguishable from human interaction, the distinction between collaboration and dependency becomes increasingly blurred. Or worse, as leading historian Yuval Noah Harari — who is renowned for his works on the history and future of humankind points out — intimacy is a powerful weapon which can then be used to persuade us.


The deflating AI bubble is inevitable — and healthy

Predicting the future is generally a fool’s errand as Nobel Prize winning physicist, Niels Bohr recognized when he stated, “Prediction is very difficult, especially about the future.” This was particularly true in the early 1990s as the Web started to take off. Even internet pioneer and ethernet standard co-inventor Robert Metcalfe was doubtful of the internet’s viability when he predicted it had a 12-month future in 1995. Two years later, he literally ate his words at the 1997 WWW Conference when he blended a printed copy of his prediction with water and drank it. But there comes a point in a new technology when its potential benefits become clear even if the exact shape of its evolution is opaque. ... Many AI deployments and integrations are not revolutionary, however, but add incremental improvements and value to existing products and services. Graphics and presentation software provider Canva, for example, has integrated Google’s Vertex AI to streamline its video editing offering. Canva users can avoid a number of tedious editing steps to create videos in seconds rather than minutes or hours. And WPP, the global marketing services giant, has integrated Anthropic’s Claude AI service into its internal marketing system, WPP Open.


Blockchain And Quantum Computing Are On A Collision Course

Herman warns, “The real danger regarding the future of blockchain is that it’s used to build critical digital infrastructures before this serious security vulnerability has been fully investigated. Imagine a major insurance company putting at great expense all its customers into a blockchain-based network, and then three years later having to rip it all out to install a quantum-secure network, in its place.” Despite the bleak outlook, Herman offers a solution that lies within the very technology posing the threat. Quantum cryptography, particularly quantum random-number generators and quantum-resistant algorithms, could provide the necessary safeguards to protect blockchain networks from quantum attacks. “Quantum random-number generators are already being implemented today by banks, governments, and private cloud carriers. Adding quantum keys to blockchain software, and to all encrypted data, will provide unhackable security against both a classical computer and a quantum computer,” he notes. Moreover, the U.S. National Institute of Standards and Technology (NIST) has stepped in to address the issue by releasing standards for post-quantum cryptography. 


Low-Code Solutions Gain Traction In Banking And Insurance Digital Transformation

“Digital transformation should be focused on quick wins so that organizations can start seeing the ROI much sooner,” he said, noting that digital transformation is not just about adopting new technologies — it’s about fundamentally rethinking how businesses operate and deliver value to their customers. One of the recurring challenges he identified is the issue of onboarding in the banking sector. Despite variations in onboarding times from one bank to another, internal inefficiencies often cause delays. A portion of these delays stems from internal traffic rather than external factors. To address this, Arun MS advocated for a shift toward self-service portals, where customers can take control of processes like document submission. “Engaging customers as stakeholders in the process reduces internal bottlenecks and speeds up the overall timeline for onboarding,” he said. This approach not only enhances operational efficiency but also improves the customer experience, which is essential in an increasingly digital world. However, Arun MS was quick to caution that transferring processes to customers must be done thoughtfully.


Why We Need AI Professional Practice

AI’s capacity to learn, interpret, and abstract at scale alters how we navigate complex, manifestly unpredictable situations and solutions, and brings an ecosystem-scale vista of possibilities, challenges, and dependencies into view. It forces us to examine every aspect of the human condition and our increasing dependence on the tools we fashion. This is the pillar of “practice’, which will emerge from the need to harness both the immediate and indirect value advanced AI can bring. It is about direct interpretation, implementation, control, and effect, rather than indirect consideration, control, and effect. It is, in metaphorical terms then, about the rubber hitting the road.​ ... As we look at how AI will continue to shape the business landscape, we can see an element that hasn’t received much attention yet: how do we ensure that the right skills, best practices, and standards are developed and shared amongst those managing this AI revolution, and most importantly, how do we uphold the standard of that professional practice? Some voices liken the onset of AI to the invention of the Internet, which reflects the skills that are now required from staff, with new data showing that 66% of business leaders wouldn’t hire someone without AI skills.


AI cybersecurity needs to be as multi-layered as the system it’s protecting

By altering the technical design and development of AI before its training and deployment, companies can reduce their security vulnerabilities before they begin. For example, even selecting the correct model architecture has considerable implications, with each AI model exhibiting particular affinities to mitigate specific types of prompt injection or jailbreaks. Identifying the correct AI model for a given use case is important to its success, and this is equally true regarding security. Developing an AI system with embedded cybersecurity begins with how training data is prepared and processed. Training data must be sanitized and a filter to limit ingested training data is essential. Input restoration jumbles an adversary’s ability to evaluate the input-output relationship of an AI model by adding an extra layer of randomness. Companies should create constraints to reduce potential distortions of the learning model through Reject-On-Negative-Impact training. After that, regular security testing and vulnerability scanning of the AI model should be performed continuously. During deployment, developers should validate modifications and potential tampering through cryptographic checks. 


Kipu Quantum Team Says New Quantum Algorithm Outshines Existing Techniques

Kipu Quantum-led team of researchers announced the successful testing of what they’re labeling the largest quantum optimization problem on a digital quantum computer. They suggest that this is the start of the commercial quantum advantage era. ... Combinatorial optimization is critical in many industries, from logistics and scheduling to computational chemistry and biology. These problems, which involve finding the best or near-optimal solutions in large discrete configuration spaces, are known to be computationally challenging, particularly for classical computing. This complexity has driven the exploration of quantum optimization techniques as an alternative. ... While Kipu Quantum’s BF-DCQO algorithm shows promise, the results are based on simulations and experiments using specific quantum architectures. The 156-qubit experimental validation was performed on IBM’s heavy-hex processor, while the 433-qubit simulation is yet to be fully realized on physical hardware. There are still challenges in scaling the method to address more complex real-world HUBO problems that require larger quantum systems.


Inside the Mind of a Hacker: How Scams Are Carried Out

Hacking is, first and foremost, a mindset. It’s a likely avenue to pursue when you're endowed with an organized mind, a passion for IT, and a boundless curiosity about taking things apart and understanding their inner workings. Since highly publicized cases usually involve the theft of exorbitant sums, it’s logical for the public to assume that monetary gain is the top motivator. While it’s high on the list, studies that explore hacker motivation consistently rank the thrill of circumventing cyber defenses and the accompanying display of one’s mastery as chief driving forces. Hacking is both technical and creative. Successful hacks happen due to a combination of high technical prowess, the ability to grasp and implement novel solutions, and a general disregard for the consequences of those actions. ... The last step involves capitalizing on a hacker’s ill-gotten gains. Those who have managed to convince someone to transfer funds use mule accounts and money laundering schemes to eventually get a hold of them. Hackers who get their hands on a company’s industrial secrets may try to sell them to the competition. Data obtained through breaches finds its way to the dark web, where other hackers may purchase it in bulk.



Quote for the day:

"Listen with curiosity speak with honesty, act with integrity." -- Roy T. Benett

Daily Tech Digest - September 08, 2024

The hidden cost of speed

The software development engine within a company is like the power grid: it’s a given that it works, and there are no celebrations or accolades for keeping the lights on. When it fails or goes down, however, everyone’s upset and what’s left is assigning blame and determining culpability. Unfortunately, in many industries, the responsible application and development of software is not considered until there’s a problem. There is no “working well” for a developer in an ecosystem without insight and intuition as to how difficult the workload is for various projects or positions. The black and white reality is simply ”Working” or “Not working, what the hell is going on, do we need to fire them, why is everything so slow lately?” This can be incredibly frustrating for developers. In my own experience, the person in the worst position is the developer brought in to clean up another developer’s mess. It’s now your responsibility not only to convince management that they need to slow down to give you time to fix things (which will stall sales), but also to architect everything, orchestrate the rollout, and coordinate with sales goals and marketing. 


Tracing The Destructive Path of Ransomware's Evolution

Contemporary attackers carefully select high-value organizations and infrastructure to cripple until substantial ransoms are paid — frequently upwards of seven figures for large corporations, hospitals, pipelines, and municipalities. Present-day ransomware groups’ techniques reflect a chilling professionalization of tactics. They leverage military-grade encryption, identity-hiding cryptocurrencies, data-stealing side efforts, and penetration testing of victims before attacks to determine maximum tolerances. Hackers often gain initial entry by purchasing access to systems from underground brokers, then deploy multipart extortion schemes, including threatening distributed denial-of-service (DDoS) attacks, if demands aren’t promptly met. Ransomware perpetrators also tap advancements like artificial intelligence (AI) to accelerate attacks through malicious code generation, underground dark web communities to coordinate schemes, and initial access markets to reduce overhead. ... Ransomware groups continue to innovate their attack methods. Supply chain attacks have become increasingly common. By compromising a single software supplier, attackers can access the networks of thousands of downstream customers.


Zero-Touch Provisioning Simplifies and Augments State and Local Networks

“With zero-touch provisioning unlocking greater time efficiencies, these agencies can more optimally serve the public,” he says. “For example, research shows that shaving mere seconds off emergency response calls yields more lives saved.” Government agencies also can reach wider and broader audiences and increase constituent trust by delivering crucial food and mobile healthcare services faster. Even agencies with strong budgets can benefit from more efficient spending thanks to zero-trust networking, DePreta adds. “By eliminating the need for manual intervention, government agencies can optimize budgets to better serve their communities and become smarter in the way they deliver services. From public services such as mobile healthcare clinics to public safety activities such as emergency response and disaster relief, ZTP enables government agencies to do more with less,” he says. ... “You can take a couple of devices and ship them to a branch, and someone who is not necessarily a technical expert in that branch can unbox them and plug them in. You are then up and running right away,” DeBacker says.


Why employee ‘will’ can make or break transformations

Leaders who focus on making work more meaningful and expressing their appreciation inspire and motivate employees. Previous McKinsey research shows that executives at organizations who invest time and effort in changing employee mindsets from the start are four times more likely than those who didn’t to say their change programs were successful. Indeed, employees notice when their bosses don’t change their own behaviors to adapt to the goals of transformation. ... he best ideas for how to implement transformation initiatives may come from frontline employees who are closest to the customer. Organizations that encourage employees to pursue innovation and continuous improvement see a higher share of employees that own initiatives or reach milestones during transformations. ... Once leaders have elevated a core group of employees to own initiatives or milestones, they should turn to empowering a broader group to serve as role models who can activate others. These change leaders—influencers, managers, and supervisors—play a visible role in shaping and amplifying the behaviors that enhance organizational performance while counteracting behaviors that get in the way of success.


Deploying digital twins: 7 challenges businesses can face and how to navigate them

An organization adopting digital twins needs to be well-networked. "The biggest roadblock to digital systems is connectivity, at the network and human levels," Thierry Klein, president of Nokia Bell Labs Solutions Research, told ZDNET. "Digital twins are most effective when multiple digital twins are integrated, but this requires collaboration among stakeholders, a robust digital network, and systems that can be connected to the digital twin." ... The ability to represent physical environments in real time also presents challenges to digital twin environments. "With digital twins, you're generally relying on your model to run parallel with some real-life physical system so you can understand certain effects that might be impacting the system," Naveen Rao, vice president of AI for Databricks, told ZDNET. ... "The lack of open, interoperable data standards presents another significant roadblock. "Antiquated technology, legacy proprietary data formats, and analog processes create silos of 'dark data' -- or data that's inaccessible to teams across the asset lifecycle," Shelly Nooner, vice president of innovation and platform for Trimble, told ZDNET. 


Why CEOs and Corporate Boards Can’t Afford to Get AI Governance Wrong

The first step in preparing for safe and successful AI adoption is establishing the necessary C-Suite governance structures. This needs to be a point of urgency, as far more advanced and powerful AI capabilities, including Artificial General Intelligence (AGI), where AI may be able to perform human cognitive tasks better than the smartest human being, loom on the horizon. BCG published a leadership report earlier this year entitled “Every C-Suite Member Is Now a Chief AI Officer.” ... Corporate leadership and boards must determine how best to manage the risks and opportunities presented by AI to serve its customers and to protect its stakeholders. To begin with, they must identify where management responsibility should sit, and how these responsibilities should be structured. BCG’s report states that from the CEO on down, there needs to be at minimum, “a basic understanding of GenAI, particularly with respect to security and privacy risks,” adding that business leaders “must have confidence that all decisions strike the right balance between risk and business benefit.”


Get ready for a tumultuous era of GPU cost volitivity

Demand is almost certain to increase as companies continue to build AI at a rapid pace. Investment firm Mizuho has said the total market for GPUs could grow tenfold over the next five years to more than $400 billion, as businesses rush to deploy new AI applications. Supply depends on several factors that are hard to predict. They include manufacturing capacity, which is costly to scale, as well as geopolitical considerations — many GPUs are manufactured in Taiwan, whose continued independence is threatened by China. Supplies have already been scarce, with some companies reportedly waiting six months to get their hands on Nvidia’s powerful H100 chips. As businesses become more dependent on GPUs to power AI applications, these dynamics mean that they will need to get to grips with managing variable costs. ... To lock in costs, more companies may choose to manage their own GPU servers rather than renting them from cloud providers. This creates additional overhead but provides greater control and can lead to lower costs in the longer term. Companies may also buy up GPUs defensively: Even if they don’t know how they’ll use them yet, these defensive contracts can ensure they’ll have access to GPUs for future needs — and that their competitors won’t.


Optimizing Continuous Deployment at Uber: Automating Microservices in Large Monorepos

The newly designed system, named Up CD, was designed to improve automation and safety. It is tightly integrated with Uber's internal cloud platform and observability tools, ensuring that deployments follow a standardized and repeatable process by default. The new system prioritized simplicity and transparency, especially in managing monorepos. One key improvement was optimizing deployments by looking at which services were affected by each commit, rather than deploying every service with every code change. This reduced unnecessary builds and gave engineers more clarity over the changes impacting their services. ... Up introduced a unified commit flow for all services, ensuring that each service progressed through a series of deployment stages, each with its own safety checks. These conditions included time delays, deployment windows, and service alerts, ensuring deployments were triggered only when safe. Each stage operated independently, allowing flexibility in customizing deployment flows while maintaining safety. This new approach reduced manual errors and provided a more structured deployment experience.


Cybercriminals use legitimate software for attacks increasing

The report underscores the growing trend of attackers adopting legitimate tools to evade security measures and deceive security personnel. These tools are used for various malicious activities, including spreading ransomware, conducting network scanning, lateral movement within networks, and establishing command-and-control (C2) operations. Among the tools identified in the report are PDQ Deploy, PSExec, Rclone, SoftPerfect, AnyDesk, ScreenConnect, and WMIC. A series of case studies detailed in the report highlights specific incidents involving these tools. Between September 2023 and August 2024, 22 posts on various criminal forums discussed or shared cracked versions of the SoftPerfect network scanner. ... Remote management and monitoring (RMM) tools like AnyDesk and ScreenConnect are also prominently featured in criminal discussions. An August 2024 post on the RAMP forum described using AnyDesk during a penetration test and recommended disabling secure logon for successful connections. Initial Access Brokers (IABs) frequently sell access to networks through these established remote management and monitoring tool connections.


Principles of Modern Data Infrastructure

Designing a modern data infrastructure to fail fast means creating systems that can quickly detect and handle failures, improving reliability and resilience. If a system goes down, most of the time, the problem is with the data layer not being able to handle the stress rather than the application compute layer. While scaling, when one or more components within the data infrastructure fail, they should fail fast and recover fast. In the meantime, since the data layer is stateful, the whole fail-and-recovery process should minimize data inconsistency as well. ... By default, databases and data stores need to be able to respond quickly to user queries under heavy throughput. Users expect a real-time or near-real-time experience from all applications. Much of the time, even a few milliseconds, is too slow. For instance, a web API request may translate to one or a few queries to the primary on-disk database and then a few to even tens of operations to the in-memory data store. For each in-memory data store operation, a sub-millisecond response time is a bare necessity for an expected user experience.



Quote for the day:

Leaders must be good listeners. It's rule number one, and it's the most powerful thing they can do to build trusted relationships. - Lee Ellis

Daily Tech Digest - September 07, 2024

Why RAG Is Essential for Next-Gen AI Development

The success of RAG implementation often depends on a company’s willingness to invest in curating and maintaining high-quality knowledge sources. Failure to do this will severely impact RAG performance and may lead to LLM responses of much poorer quality than expected. Another difficult task that companies frequently run into is developing an effective retrieval mechanism. Dense retrieval, a semantic search technique, and learned retrieval, which involves the system recalling information, are two approaches that produce favorable results. Many companies need help integrating RAG into existing AI systems and scaling RAG to handle large knowledge bases. Potential solutions to these challenges include efficient indexing and caching and implementing distributed architectures. Another common problem is properly explaining the reasoning behind RAG-generated responses, as they often involve information taken from multiple sources and models. ... By integrating external knowledge sources, RAG helps LLMs prevail over the limitations of a parametric memory and dramatically reduce hallucinations. As Douwe Keila, an author of the original paper about RAG, said in a recent interview


A global assessment of third-party connection tampering

To be clear, there are many reasons a third party might tamper with a connection. Enterprises may tamper with outbound connections from their networks to prevent users from interacting with spam or phishing sites. ISPs may use connection tampering to enforce court or regulatory orders that demand website blocking to address copyright infringement or for other legal purposes. Governments may mandate large-scale censorship and information control. Despite the fact that everyone knows it happens, no other large operation has previously looked at the use of connection tampering at scale and across jurisdictions. We think that creates a notable gap in understanding what is happening in the Internet ecosystem, and that shedding light on these practices is important for transparency and the long-term health of the Internet. ... Ultimately, connection tampering is possible only by accident – an unintended side effect of protocol design. On the Internet, the most common identity is the domain name. In a communication on the Internet, the domain name is most often transmitted in the “server name indication (SNI)” field in TLS – exposed in cleartext for all to see.


The human brain deciphered and the first neural map created

The formation of such a neural map was made possible with the help of several technologies. First, as mentioned earlier, the employment of electron microscopy enabled the researchers to obtain images of the brain tissue at a scale that could capture details of synapses. Such papers provided the necessary level of detail to reveal how neurons are connected and can communicate with other neurons. Second, the massive volume of data produced by the imaging process needed high computing capability and machine learning to parse and analyze. It was also claimed that the company’s experience in AI and data processing was helpful in the correct positioning of the 2D images into a 3D one and in the proper segmentation of many of the parts of the brain tissue. Last of all, the decision to share the neural map as an open-access database has extended the potential for future research and cooperation in the sphere of neuroscience. The development of this neural map has excellent potential for neuroscience and other disciplines. In neuropharmacology, the map offers an opportunity to gain a substantial amount of information about how neurons are wired within the brain and how certain diseases, such as schizophrenia or autism, occur.


InfoQ AI, ML and Data Engineering Trends Report - September 2024

The AI-enabled agent programs are another area that’s seeing a lot of innovation. Autonomous agents and GenAI-enabled virtual assistants are coming up in different places to help software developers become more productive. AI-assisted programs can enable individual team members to increase productivity or collaborate with each other. Gihub’s Copilot, Microsoft Teams’ Copilot, DevinAI, Mistral’s Codestral, and JetBrains’ local code completion are some examples of AI agents. GitHub also recently announced its GitHub Models product to enable the large community of developers to become AI engineers and build with industry-leading AI models. ... With the emergence of multi-model language models like GPT-4o, privacy and security when handling non-textual data like videos become even more critical in the overall machine learning pipelines and DevOps processes. The podcast panelist’s AI safety and security recommendations are to have a comprehensive lineage and mapping of where your data is going. Train your employees to have proper data privacy security practices, and also make the secure path the path of least resistance for them so everyone within your organization easily adopts it.


Does it matter what kind of hard drive you use in a NAS?

Consumer drives aren't designed for heavier workloads, nor are they built with multiple units running adjacent to one another. This can cause issues with vibrations, particularly for 3.5-inch mechanical drives. Firmware and endurance are other concerns since the drives themselves won't be built with RAID and NAS in mind. Combining the two with heavier workloads through multiple user accounts and clients could lead to easier drive failure. These drives will be cheaper than their NAS equivalents, however, and no drive is immune to failure. You could see consumer drives outlive NAS drives inside the same enclosure. ... Shingled magnetic recording (SMR) and conventional magnetic recording (CMR) are two types of storage technologies used for storing data on spinning platters inside an HDD. CRM uses concentric circles (or tracks) for saving data, which are segmented into sectors. Everything is recorded linearly with each sector being written and read independently, allowing specific sectors to be rewritten without affecting any other sector on the drive. SMR is a newer technology that takes the same concentric circles approach but instead overlaps the tracks to bolster storage capacity but performance suffers alongside reliability.


What’s next in AI and HPC for IT leaders in digital infrastructure?

The AI nirvana for enterprises? In 2024, we'll see enterprises build ChatGPT-like GenAI systems for their own internal information resources. Since many companies' data resides in silos, there is a real opportunity to manage AI demand, build AI expertise, and cross-functional department collaboration. This access to data comes with an existential security risk that could strike at the heart of a company: intellectual property. That’s why in 2024, forward-thinking enterprises will use AI for robust data security and privacy measures to ensure intellectual property doesn’t get exposed on the public internet. They will also shrink the threat landscape by honing in on internal security risks. This includes the development of internal regulations to ensure sensitive information isn't leaked to non-privileged internal groups and individuals. ... At this early stage of AI initiatives, enterprises are dependent on technology providers and their partners to advise and support the global roll-out of AI initiatives. In Asia Pacific, it’s a race to build, deploy, and subsequently train the right AI clusters. Since a prime use case is cybersecurity threat detection, working with the respective cybersecurity technology providers is key.


Red Hat unleashes Enterprise Linux AI - and it's truly useful

In a statement, Joe Fernandes, Red Hat's Foundation Model Platform vice president, said, "RHEL AI provides the ability for domain experts, not just data scientists, to contribute to a built-for-purpose gen AI model across the hybrid cloud while also enabling IT organizations to scale these models for production through Red Hat OpenShift AI." RHEL AI isn't tied to any single environment. It's designed to run wherever your data lives -- whether it be on-premise, at the edge, or in the public cloud. This flexibility is crucial when implementing AI strategies without completely overhauling your existing infrastructure. The program is now available on Amazon Web Services (AWS) and IBM Cloud as a "bring your own (BYO)" subscription offering. In the next few months, it will be available as a service on AWS, Google Cloud Platform (GCP), IBM Cloud, and Microsoft Azure. Dell Technologies has announced a collaboration to bring RHEL AI to Dell PowerEdge servers. This partnership aims to simplify AI deployment by providing validated hardware solutions, including NVIDIA accelerated computing, optimized for RHEL AI.


Quantum computing is coming – are you ready?

The good thing is that awareness of the challenge is increasing. Some verticals, such as finance, have it absolutely top of mind with some already having quantum safe algorithms in production. Likewise, some manufacturing sectors are examining the impact, given the implications of having to upgrade embedded or IoT devices. And, of course, medical devices offer a particularly heightened security and trust challenge. "I think for these device manufacturers, they had a moment where they realized they can't go ahead and push the devices out as fast as they are without thinking about proper security," says Hojjati. But not everyone is on top of the problem. Which is why DigiCert is backing Quantum Readiness Day on September 26, to coincide with the expected finalization of the new algorithms by NIST. The worldwide event will bring together experts, both in how to break encryption and how to implement the upcoming post quantum algorithms, helping you make sure you're ahead of the problem. As Hojjati says, whether we've reached Q Day or not, "This is real, this is here, the standards have been released. ..."


How cyberattacks on offshore wind farms could create huge problems

Successful cyberattacks could lower public trust in wind energy and other renewables, the report from the Alan Turing Institute says. The authors add that artificial intelligence (AI) could help boost the resilience of offshore wind farms to cyber threats. However, government and industry need to act fast. The fact that offshore wind installations are relatively remote makes them particularly vulnerable to disruption. Land turbines can have nearby offices, so getting someone to visit the site is much easier than at sea. Offshore turbines tend to require remote monitoring and special technology for long distance communication. These more complicated solutions mean that things can go wrong more easily. ... Most cyberattacks are financially motivated, such as the ransomware attacks that have targeted the NHS in recent years. These typically block the users’ access to their computer data until a payment is made to the hackers. But critical infrastructure such as energy installations are also exposed. There may be various motivations for launching cyberattacks against them. One important possibility is that of a hostile state that wants to disrupt the UK’s energy supply – and perhaps also undermine public confidence in it.


Data Skills Gap Is Hampering Productivity; Is Upskilling the Answer?

"A well-crafted data strategy will highlight where specific skills need to be developed to achieve business objectives," said Michael Curry, president of data modernization at Rocket Software. He explained that since a data strategy typically involves both risk mitigation and value realization, it's important to consider skill gaps on both sides. Kjell Carlsson, head of AI strategy at Domino Data Labs, said better data prep, analysis, and visualization skills would help organizations become more data-driven and make better decisions that would significantly improve growth and curtail waste. "Imbuing your workforce with better prompt engineering skills will help them code, research, and write vastly more efficiently," he said. "A well-crafted data strategy will highlight where specific skills need to be developed to achieve business objectives," said Michael Curry, president of data modernization at Rocket Software. He explained that since a data strategy typically involves both risk mitigation and value realization, it's important to consider skill gaps on both sides. ... "Imbuing your workforce with better prompt engineering skills will help them code, research, and write vastly more efficiently," he said.



Quote for the day:

"Leadership should be born out of the understanding of the needs of those who would be affected by it." -- Marian Anderson

Daily Tech Digest - September 06, 2024

Quantum utility: The next milestone on the road to quantum advantage

“Quantum utility is a term that has only been coined recently, in the last 12 months or so. On the timeline that I’ve just described, there is a milestone that sits between where we are now and the beginning of this quantum advantage era. And that is this quantum utility concept. It’s basically where quantum computers are able to demonstrate, or in this case, in recent demonstrations, simulate a problem beyond the capabilities of just brute force classical computation using sufficiently large quantum computational devices. So, in this case, devices with more than 100 qubits,” she says. ... “It’s really an indication of how close we are to demonstrating quantum advantage, and where we can hopefully begin to see quantum computing computers serving as a scientific tool to explore a new scale of problems beyond brute force, classical simulation. So, it’s an indication of how close we are to quantum advantage and ideally, we’ll be hoping to see some demonstration of that in the next few years. No one really knows exactly when, but the idea is that those who are able to harness this era of quantum utility will also be among the first to achieve real quantum advantage as well.”


5 tips for switching to skills-based hiring

Skills come in a variety forms, such as hard skills, which comprise the technical skills necessary to complete tasks; soft skills, which center around a person’s interpersonal skills; and cognitive skills, which include problem solving, decision making, and logical reasoning, among other skills. Before embarking on a skills-based hiring strategy, it’s vital to have clear insight into the skills your organization already has internally, in addition to all the skills needed to complete projects and reach business goals. As you identify and categorize skills, it’s important to review job descriptions as well to ensure they’re up-to-date and don’t include any unnecessary skills or vague requirements. It’s crucial as well to evaluate how your job descriptions are written to ensure you’re drawing in the right talent for open roles. Wording job descriptions can be especially tricky when it comes to soft skills. For example, if your organization values someone who’s humble or savvy, you’ll need to identify how that translates to a skill you can list on a job description and, eventually, verify, says Hannah Johnson, senior VP for strategy and market development at IT trade association CompTIA.


Could California's AI Bill Be a Blueprint for Future AI Regulation?

“If approved, legislation in an influential state like California could help to establish industry best practices and norms for the safe and responsible use of AI,” Ashley Casovan, managing director, AI Governance Center at non-profit International Association of Privacy Professionals (IAPP), says in an email interview. California is hardly the only place with AI regulation on its radar. The EU AI Act passed earlier this year. The federal government in the US released an AI Bill of Rights, though this serves as guidance rather than regulation. Colorado and Utah enacted laws applying to the use of AI systems. “I expect that there will be more domain-specific or technology-specific legislation for AI emerging from all of the states in the coming year,” says Casovan. As quickly as it seems new AI legislation, and the accompanying debates, pops up, AI moves faster. “The biggest challenge here…is that the law has to be broad enough because if it's too specific maybe by the time it passes, it is already not relevant,” says Ruzzi. Another big part of the AI regulation challenge is agreeing on what safety in AI even means. “What safety means is…very multifaceted and ill-defined right now,” says Vartak.


Why and How to Secure GenAI Investments From Day Zero

Because GenAI remains a relatively novel concept that many companies are officially using only in limited contexts, it can be tempting for business decision-makers to ignore or downplay the security stakes of GenAI for the time being. They assume there will be time to figure how to secure large language models (LLMs) and mitigate data privacy risks later, once they’ve established basic GenAI use cases and strategies. Unfortunately, this attitude toward GenAI is a huge mistake, to put it mildly. It’s like learning to pilot a ship without thinking about what you’ll do if the ship sinks, or taking up a high-intensity sport without figuring out how to protect yourself from injury until you’ve already broken a bone. A healthier approach to GenAI is one in which organizations build security protections from the start. Here’s why, along with tips on how to integrate security into your organization’s GenAI strategy from day zero. ... GenAI security and data privacy challenges exist regardless of the extent to which an organization has adopted GenAI or which types of use cases it’s targeting. It’s not as if they only matter for companies making heavy use of AI or using AI in domains where special security, privacy or compliance risks apply.


US, UK and EU sign on to the Council of Europe’s high-level AI safety treaty

The high-level treaty sets out to focus on how AI intersects with three main areas: human rights, which includes protecting against data misuse and discrimination, and ensuring privacy; protecting democracy; and protecting the “rule of law.” Essentially the third of these commits signing countries to setting up regulators to protect against “AI risks.” The more specific aim of the treaty is as lofty as the areas it hopes to address. “The treaty provides a legal framework covering the entire lifecycle of AI systems,” the COE notes. “It promotes AI progress and innovation, while managing the risks it may pose to human rights, democracy and the rule of law. To stand the test of time, it is technology-neutral.” ... The idea seems to be that if AI does represent a mammoth change to how the world operates, if not watched carefully, not all of those changes may turn out to be for the best, so it’s important to be proactive. However there is also clearly nervousness among regulators about overstepping the mark and being accused of crimping innovation by acting too early or applying too broad a brush. AI companies have also jumped in early to proclaim that they, too, are just as interested in what’s come to be described as AI Safety. 


Fight Against Ransomware and Data Threats

Ransomware as a Service (RaaS) is becoming a massive industry. The tools to create ransomware attacks are readily available online, and it’s becoming easier for people even those with limited technical skills to launch attacks. We have the largest pool of software developers in the world, and unfortunately, a small portion of them see ransomware as a way to make easy money. There are even reports of recruitment drives in certain states to hire engineers or tech-savvy individuals to develop ransomware software. ... The industries most affected by ransomware tend to be those that are heavily regulated, such as BFSI (Banking, Financial Services, and Insurance), healthcare, and insurance. These industries deal with highly valuable, critical data, which makes them prime targets for attackers. Because of the sensitive nature of the data they handle, these organizations are often willing to pay the ransom to get it back. The reason these industries are so heavily regulated is that they’re dealing with data that is more critical than in other industries. Healthcare companies, for example, are regulated by agencies like the FDA in the U.S. and their Indian equivalent. Financial services are regulated by the RBI or SEBI in India. 


Cloud Security Assurance: Is Automation Changing the Game?

For cloud workloads, security assurance teams must assess and gather evidence for each component’s adherence to security standards, including for components and configurations the cloud provider runs. Luckily, cloud providers offer downloadable assurance and compliance certificates. These certificates and reports are essential for the cloud providers’ business. Larger customers, especially, work only with vendors that adhere to the standards relevant to these customers. The exact standards vary by the customers’ jurisdiction and industry. Figure 3 illustrates the extensive range of global, country-specific, and industry-specific standards Azure (for example) provides for download to their customers and prospects. ... These cloud security assurance reports cover the infrastructure layer and the security of the cloud provider’s IaaS, PaaS, and SaaS services. They do not cover customer-specific configurations, patching, or operations, including securing AWS S3 buckets against unauthorized access or patching VMs (Figure 4). Whether customers configure these services securely and put them adequately together is in the customers’ hands – and the customer security assurance team must validate that.


The Road from Chatbots and Co-Pilots to LAMs and AI Agents

We are beginning an evolution from knowledge-based, gen-AI-powered tools–say, chatbots that answer questions and generate content–to gen AI–enabled ‘agents’ that use foundation models to execute complex, multistep workflows across a digital world,” analysts with the consulting giant write. “In short, the technology is moving from thought to action.” AI agents, McKinsey says, will be able to automate “complex and open-ended use cases” thanks to three characteristics they possess, including: the capability to manage multiplicity; the capability to be directed by natural language; and the capability to work with existing software tools and platforms. ... “Although agent technology is quite nascent, increasing investments in these tools could result in agentic systems achieving notable milestones and being deployed at scale over the next few years,” the company writes. PC acknowledges that there are some challenges to building automated applications with the LAM architecture at this point. LLMs are probabilistic and sometimes can go off the rails, so it’s important to keep them on track by combining them with classical programming using deterministic techniques.


Are you ready for data hyperaggregation?

Data hyperaggregation is not simply a technological advancement. It’s a strategic initiative that aligns with the broader trend of digital transformation. Its ability to provide a unified view of disparate data sources empowers organizations to harness their data effectively, driving innovation and creating competitive advantages in the digital landscape. As the field continues to evolve, the fusion of data hyperaggregation with cutting-edge technologies will undoubtedly shape the future of cloud computing and enterprise data strategies. The problems and solutions related to enterprise data aggregation are familiar. Indeed, I wrote books about it in the 1990s. In 2024, we still can’t get it right. The problems have actually gotten much worse with the addition of cloud providers and the unwillingness to break down data silos within enterprises. Things didn’t get simpler, they got more complex. Now, AI needs access to most data sources that enterprises maintain. Because universal access methodologies still don’t exist, we invented a new buzzword, “data hyperaggregation.” If this iteration of data gathering catches on, we get to solve the disparate data problem for more reasons than just AI. I hold out hope. Am I naive? We’ll see.


Unlock Business Value Through Effective DevOps Infrastructure Management

Whatever mix of architectures an organization uses, however, the best strategy is rooted in their specific needs, focusing on profitability and customer satisfaction. Overly complex systems not only cost more, but they also reduce the return on investment (ROI) and efficiency. Innovation delivers services to customers faster and more efficiently than before. With the plethora of technologies available today, it's imperative for organizations to be clear about what provides real value to reduce the cost and time spent on infrastructure issues. ... Adopting DevOps infrastructure management practices encourages the use of solutions like IaC, making deployments more repeatable, scalable, and reliable. Automation and continuous monitoring free up resources to focus on a broader range of tasks, including security, developer experience, and time to market. Robust documentation processes are critical to preserve this culture of continuous improvement, efficiency, and productivity over time. Should a project be handed to a new team, documentation helps maintain continuity and can reveal historical inefficiencies or issues. 



Quote for the day:

“People are not lazy. They simply have important goals – that is, goals that do not inspire them.” -- Tony Robbins