Daily Tech Digest - September 16, 2024

AI Ethics – Part I: Guiding Principles for Enterprise

The world has caught up to what was once science fiction. We are now designing AI that is in some ways far more advanced than anything Isaac Asimov could have imagined, and at the same time far more limited. Even though they were originally conceived as fictional principles, Isaac Asimov’s Three Laws of Robotics have inspired efforts to adapt and extend them for modern enterprise AI-based solutions. Here are some notable examples: Human-Centric AI Principles - Modern AI ethics frameworks often emphasize human safety and well-being, echoing Asimov’s First Law. ... Ethical AI Guidelines - Enterprises are increasingly developing ethical guidelines for AI that align with Asimov’s Second Law. These guidelines ensure that AI systems obey human instructions while prioritizing ethical considerations. ... Bias Mitigation and Fairness - In line with Asimov’s Third Law, there is a strong focus on protecting the integrity of AI systems. This includes efforts to mitigate biases and ensure fairness in AI outputs. ... Enhanced Ethical Frameworks - Some modern adaptations add further principles, such as the “Zeroth Law,” which prioritizes humanity’s overall well-being.


Power of Neurodiversity: Why Software Needs a Revolution

Neurodiversity, which includes ADHD, autism spectrum disorder, and dyslexia, presents unique challenges for individuals, yet it also comes with many unique strengths. People on the autism spectrum often excel in logical thinking, while individuals with ADHD can demonstrate exceptional attention to detail when engaged in areas of interest. Those with dyslexia frequently display creative thinking skills. However, software design often fails to accommodate neurodiverse users. For example, websites or apps with cluttered interfaces can overwhelm users with ADHD, while sites that rely heavily on text make it harder for individuals with dyslexia to process information. Additionally, certain sounds or bright colors may be overwhelming for someone with autism. Users should not have to adapt to poorly designed software; instead, designers must create products that meet these users' needs. Waiting to receive accessibility training on the job may be too late, as designers and developers would then need to relearn foundational skills. Moreover, accessibility still does not seem to be a priority in the workplace, with most job postings for relevant positions not requiring these skills.


Protect Your Codebase: The Importance of Provenance

When you know that provenance is a vector for a software supply chain attack, you can take action to protect it. The first step is to collect the provenance data for your dependencies where it exists; projects that meet SLSA level 1 or higher produce provenance data you can inspect and verify. Ensure that trusted identities generate provenance: if you can prove that provenance data came from a system you own and have secured, or from a known good actor, it’s easier to trust. Cryptographic signing of provenance records provides assurance that the record was produced by a verifiable entity — either a person or a system with the appropriate cryptographic key. Store provenance data in a write-once repository, so you can verify later whether any provenance data was modified. Modification, whether malicious or accidental, is a warning sign that your dependencies may have been tampered with. It’s also important to protect the provenance you produce for yourself and any downstream users. Implement strict access and authentication controls to ensure only authorized users can modify provenance records.
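To make the two checks described above concrete, here is a minimal Python sketch: it verifies that a provenance record was signed by a trusted key and that the record actually describes the artifact on disk. The file layout, the Ed25519 key format, and the `subject_sha256` field name are assumptions for illustration only, not any particular SLSA tool's format.

```python
# Minimal sketch (not a specific SLSA tool): verify that a provenance record
# was signed by a trusted identity and matches the artifact it describes.
# File names, key type (Ed25519), and the JSON field are assumptions.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_provenance(artifact_path, provenance_path, signature_path, pubkey_bytes):
    # 1. Signature check: was this record produced by a key we trust?
    provenance_bytes = open(provenance_path, "rb").read()
    signature = open(signature_path, "rb").read()
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    try:
        public_key.verify(signature, provenance_bytes)
    except InvalidSignature:
        return False, "provenance signature is not from a trusted identity"

    # 2. Digest check: does the record describe this exact artifact?
    sha256 = hashlib.sha256(open(artifact_path, "rb").read()).hexdigest()
    recorded = json.loads(provenance_bytes).get("subject_sha256")  # assumed field name
    if recorded != sha256:
        return False, "artifact digest does not match the provenance record"

    return True, "provenance verified"
```

The same pattern extends naturally to the write-once idea: if you re-run the digest check against a copy of the record stored in append-only storage, any later modification becomes detectable.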


Are You Technical or Non-Technical? Time to Reframe the Discussion

The term “technical” can introduce bias into hiring and career development, potentially leading to decisions swayed more by perception than by a candidate’s qualifications. Hiring decisions can reflect personal biases when candidates do not fit a stereotypical image or lack qualifications that are not essential for the role. For instance, a candidate might be viewed as not technical enough because they lack server administration experience, even when the job primarily involves software development. To address this issue, it is important to clearly define the skills required for a position. For example, rather than broadly labeling a candidate as “not technical enough,” it is more effective to specify areas for improvement, such as “needs advanced database management skills.” This approach not only highlights areas where candidates excel, such as developing user-centric reports, but also clarifies specific shortcomings. Clearly stating requirements, such as “requires experience building scalable applications with technology Y,” makes the hiring process more transparent and objective.


Will Future AI Demands Derail Sustainable Energy Initiatives?

The single biggest thing enterprises are doing to address energy concerns is moving toward more energy-efficient second-generation chips, says Duncan Stewart, a research director with advisory firm Deloitte Technology, via email. "These chips are a bit faster at accelerating training and inference -- about 25% better than first-gen chips -- and their efficiency is almost triple that of first-generation chips." He adds that almost every chipmaker is now targeting efficiency as the most important chip feature. In the meantime, developers will continue to play a key role in optimizing AI energy needs, as well as validating whether AI is even required to achieve a particular outcome. "For example, do we need to use a large language model that requires lots of computing power to generate an answer from enormous data sets, or can we use more narrow and applied techniques, like predictive models that require much less computing because they’ve been trained on much more specific and relevant data sets?" Warburton asks. "Can we utilize compute instances that are powered by low-carbon electricity sources?"


When your cloud strategy is ‘it depends’

As for their use of private cloud, some of the rationale is purely a cost calculation. For some workloads, it’s cheaper to run on premises. “The cloud is not cheaper. That’s a myth,” one of the IT execs told me, while acknowledging cost wasn’t their primary reason for embracing cloud anyway. I’ve been noting this for well over a decade. Convenience, not cost, tends to drive cloud spend—and leads to a great deal of cloud sprawl, as Osterman Research has found. ... You want developers, architects, and others to feel confident with new technology. You want to turn them into allies, not holdouts. Jassy declared, “Most of the big initial challenges of transforming the cloud are not technical” but rather “about leadership—executive leadership.” That’s only half true. It’s true that developers thrive when they have executive air cover. This support makes it easier for them to embrace a future they likely already want. But they also need that executive support to include time and resources to learn the technologies and techniques necessary for executing that new direction. If you want your company to embrace new directions faster, whether cloud or AI or whatever it may be, make it safe for them to learn. 


4 steps to shift from outputs to outcomes

Shifting the focus to outcomes — business results aligned with strategic goals — was the key to unlocking value. David had to teach his teams to see the bigger picture of their business impact. By doing this, every project became a lever to achieve revenue growth, cost savings, and customer satisfaction, rather than just another task list. Simply being busy doesn’t mean a project is successful in delivering business value, yet many teams proudly wear busy badges, leaving executives wondering why results aren’t materializing. Busy doesn’t equal productive. In fact, busy gets in the way of being productive. ... A common issue is that project teams lose sight of how their work aligns with the company’s broader goals. When David took over, his teams were still disconnected from those strategic objectives, but by revisiting them and ensuring that every project directly supported those goals, the teams could finally see they were part of something much larger than just a list of tasks. Many business leaders think their teams are mind readers. They hold a town hall, send out a slide deck, and then expect everyone to get it. But months later, they’re surprised when the strategy starts slipping through their fingers.


Is Your Business Ready For The Inevitable Cyberattack?

Cybersecurity threats are inevitable, making it essential for businesses to prepare for the worst. The critical question is: if your business is hacked, is your data protected, and can you recover it in hours rather than days or weeks? If not, you are leaving your business vulnerable to severe disruptions. While everyone emphasises the importance of backups, the real challenge lies in ensuring their integrity and recoverability. Are your backups clean? Can you quickly restore data without prolonged downtime? The total cost of ownership (TCO) of your data protection strategy over time is a crucial consideration. Traditional methods, such as relying on Iron Mountain for physical backups, are cumbersome and time-bound, requiring significant effort to locate and restore data. ... The story of data storage, much like the shift to cloud computing, revolves around strategically placing the right parts of your business operations in the most suitable locations at the right times. Data protection follows the same principle. Resilience is still a topic of frequent discussion, yet its broad nature makes it challenging to establish a clear set of best practices.


Digital twin in the hospitality industry-Innovations in planning & designing a hotel

The Metaverse is transforming the guest experience: hotels can offer virtual reality tours of rooms and services, giving guests the chance to preview them before booking. Moreover, hotels can provide tailored virtual experiences through interactive concierge services and bespoke room décor options. More events can be hosted through immersive games and interactive entertainment, bringing better visitor experiences to the hospitality industry and generating revenue through tickets, sponsorships, and virtual item sales. ... Operational efficiency is the bottom line of hospitality, where details that seem small matter greatly for guest satisfaction. Imagine a hotel whose HVAC and lighting systems are mirrored by a digital twin. Managers can then understand energy consumption patterns, predict what will require maintenance, and adjust settings accordingly based on real-time data. Digital twins also enable better training of staff and better planning of resources: staff can become comfortable with changes in procedures and layout beforehand by interacting with the virtual model.
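As a rough illustration of the HVAC idea, the sketch below models a hypothetical digital twin of one hotel zone that compares live sensor readings against an expected baseline and flags likely maintenance needs. The class, thresholds, and sensor fields are invented for illustration; a real digital twin would mirror far more of the building's systems.

```python
# Hypothetical sketch of a digital-twin style check for one hotel HVAC zone.
# Readings, baseline values, and thresholds are invented for illustration.
from dataclasses import dataclass


@dataclass
class HvacReading:
    zone: str
    power_kw: float       # measured power draw
    supply_temp_c: float  # measured supply-air temperature


class HvacZoneTwin:
    """Mirrors one HVAC zone and flags deviations from its expected behaviour."""

    def __init__(self, zone, expected_power_kw, expected_temp_c, tolerance=0.15):
        self.zone = zone
        self.expected_power_kw = expected_power_kw
        self.expected_temp_c = expected_temp_c
        self.tolerance = tolerance  # allow 15% deviation before flagging

    def assess(self, reading: HvacReading) -> list[str]:
        alerts = []
        if abs(reading.power_kw - self.expected_power_kw) > self.tolerance * self.expected_power_kw:
            alerts.append(f"{self.zone}: power draw {reading.power_kw} kW deviates from baseline; check for failing equipment")
        if abs(reading.supply_temp_c - self.expected_temp_c) > 2.0:
            alerts.append(f"{self.zone}: supply air temperature off target; schedule maintenance")
        return alerts


# Example: feed one real-time reading into the twin.
twin = HvacZoneTwin("lobby", expected_power_kw=12.0, expected_temp_c=14.0)
print(twin.assess(HvacReading("lobby", power_kw=15.5, supply_temp_c=17.2)))
```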


The cybersecurity paradigm shift: AI is necessitating the need to fight fire with fire

Organisations should be prepared for the worst-case scenario of a cyber-attack to establish cyber resilience. This involves being able to protect and secure data, detect cyber threats and attacks, and respond with automated data recovery processes. Each element is critical to ensuring an organization can maintain operational integrity under attack. ... However, the reality is that many organisations are unable to keep up. From the company's recent survey released in late January 2024, 79% of IT and security decision-makers said they did not have full confidence in their company’s cyber resilience strategy. Just 12% said their data security, management, and recovery capabilities had been stress tested in the six months prior to being surveyed. ... To bolster cyber resilience, companies must integrate a robust combination of people, processes, and technology. Fostering a skilled workforce equipped to detect and respond to threats effectively starts with having employee education and training in place to keep pace with the rising sophistication of AI-driven phishing attacks.



Quote for the day:

"If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work." -- Thomas J. Watson

Daily Tech Digest - September 15, 2024

Data Lakes Evolve: Divisive Architecture Fuels New Era of AI Analytics

“Data lakes led to the spectacular failure of big data. You couldn’t find anything when they first came out,” Sanjeev Mohan, principal at the SanjMo tech consultancy, told Data Center Knowledge. There was no governance or security, he said. What was needed were guardrails, Mohan explained. That meant safeguarding data from unauthorized access and respecting governance standards such as GDPR. It meant applying metadata techniques to identify data. “The main need is security. That calls for fine-grained access control – not just throwing files into a data lake,” he said, adding that better data lake approaches can now address this issue. Now, different personas in an organization are reflected in different permissions settings. ... This type of control was not standard with early data lakes, which were primarily “append-only” systems that were difficult to update. New table formats changed this. Table formats like Delta Lake, Iceberg, and Hudi have emerged in recent years, introducing significant improvements in data update support. For his part, Sanjeev Mohan said standardization and wide availability of tools like Iceberg give end-users more leverage when selecting systems. 
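As a sketch of what the newer table formats make possible, the PySpark snippet below creates an Apache Iceberg table and applies a row-level update, the kind of in-place change that early append-only data lakes struggled with. It assumes the Spark session picks up an Iceberg-enabled configuration (a catalog named `demo` plus the Iceberg SQL extensions); the table and column names are illustrative.

```python
# Minimal sketch: row-level updates on an Iceberg table via Spark SQL.
# Assumes the session is configured with an Iceberg catalog named `demo`
# and the Iceberg SQL extensions; table and columns are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iceberg-update-demo").getOrCreate()

# Create a governed, updatable table instead of "throwing files into a data lake".
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo.crm.customers (
        id BIGINT,
        email STRING,
        consent_withdrawn BOOLEAN
    ) USING iceberg
""")

# Row-level update -- for example, honouring a GDPR-style consent change --
# something append-only data lakes could not do in place.
spark.sql("""
    UPDATE demo.crm.customers
    SET email = NULL, consent_withdrawn = TRUE
    WHERE id = 42
""")
```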


Data at the Heart of Digital Transformation: IATA's Story

It's always good to know what the business goals are, from a strategic perspective, which informs the data that is needed to enable digital transformation. Data is at the heart of digital transformation. Business strategy comes first and then data strategy, followed by technology strategy. At IATA, we formed the Data Steering Group and identified critical datasets across the organization. We then set up a data catalog and established a governance structure. This was followed by the launch of the Data Governance Committee and the role of a chief data officer. We're going to be implementing an automated data catalog and some automation tools around data quality. Data governance has allowed us to break down data silos. It has also enabled us to establish IATA's industry data strategy. We treat data as an asset, and that data is not owned by any particular division but looked at holistically at the organizational level. And that has allowed us opportunities to do some exciting things in the AI and analytics space and even in the way we deal with our third-party data suppliers and member airlines.


New Android Warning As Hackers Install Backdoor On 1.3 Million TV Boxes

"This is a clear example of how IoT devices can be exploited by malicious actors,” Ray Kelly, fellow at the Synopsys Software Integrity Group, said, “the ability of the malware to download arbitrary apps opens the door to a range of potential threats.” Everything from a TV box botnet for use in distributed denial of service attacks through to stealing account credentials and personal information. Responsibility for protecting users lies with the manufacturers, Kelly said, they must “ensure their products are thoroughly tested for security vulnerabilities and receive regular software updates.” "These off-brand devices discovered to be infected were not Play Protect certified Android devices,” a Google spokesperson said, “If a device isn't Play Protect certified, Google doesn’t have a record of security and compatibility test results.” Whereas these Play Protect certified devices have undergone testing to ensure both quality and user safety, other boxes may not have done. “To help you confirm whether or not a device is built with Android TV OS and Play Protect certified, our Android TV website provides the most up-to-date list of partners,” the spokesperson said.


Engineers Day: Top 5 AI-powered roles every engineering graduate should consider

Generative AI engineer: They play a pivotal role in analysing vast datasets to extract actionable insights and drive data-informed decision-making processes. This role demands a comprehensive understanding of statistical analysis, machine learning techniques, and programming languages such as Python and R. ... AI research scientist: They are at the forefront of advancing AI technologies through groundbreaking research and innovation. With a robust mathematical background, professionals in this role delve into programming languages such as Python and C++, harnessing the power of deep learning, natural language processing, and computer vision to develop cutting-edge solutions. ... Machine Learning engineer: Machine learning engineers are tasked with developing cutting-edge machine learning models and algorithms to address complex problems across various industries. To excel in this role, professionals must develop a strong proficiency in programming languages such as Python, along with a deep understanding of machine learning frameworks like TensorFlow and PyTorch. Expertise in data preprocessing techniques and algorithm development is also quite crucial here. 


Kubernetes attacks are growing: Why real-time threat detection is the answer for enterprises

Attackers are ruthless in pursuing the weakest part of the attack surface, and with Kubernetes containers, runtime is becoming a favorite target. That’s because containers are live and processing workloads during the runtime phase, making it possible to exploit misconfigurations, privilege escalations or unpatched vulnerabilities. This phase is particularly attractive for crypto-mining operations, where attackers hijack computing resources to mine cryptocurrency. “One of our customers saw 42 attempts to initiate crypto-mining in their Kubernetes environment. Our system identified and blocked all of them instantly,” Gil told VentureBeat. Additionally, large-scale attacks, such as identity theft and data breaches, often begin once attackers gain unauthorized access during runtime, where sensitive information is in use and thus more exposed. Based on the threats and attack attempts CAST AI saw in the wild and across its customer base, the company launched its Kubernetes Security Posture Management (KSPM) solution this week. What is noteworthy about the approach is how it enables DevOps teams to detect and automatically remediate security threats in real time.
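As a small illustration of the kind of misconfiguration that makes runtime attractive to attackers, the sketch below uses the official Kubernetes Python client to flag pods running privileged containers. It is a point-in-time posture check, not the real-time detection described above, and it assumes a reachable cluster and a local kubeconfig.

```python
# Minimal sketch: flag privileged containers across all namespaces.
# This is a posture check, not real-time runtime threat detection.
from kubernetes import client, config


def find_privileged_pods():
    config.load_kube_config()  # or config.load_incluster_config() when run inside a pod
    v1 = client.CoreV1Api()
    findings = []
    for pod in v1.list_pod_for_all_namespaces().items:
        for container in pod.spec.containers:
            sc = container.security_context
            if sc is not None and sc.privileged:
                findings.append(f"{pod.metadata.namespace}/{pod.metadata.name}: "
                                f"container '{container.name}' runs privileged")
    return findings


if __name__ == "__main__":
    for finding in find_privileged_pods():
        print(finding)
```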


Begun, the open source AI wars have

Open source leader julia ferraioli agrees: "The Open Source AI Definition in its current draft dilutes the very definition of what it means to be open source. I am absolutely astounded that more proponents of open source do not see this very real, looming risk." AWS principal open source technical strategist Tom Callaway said before the latest draft appeared: "It is my strong belief (and the belief of many, many others in open source) that the current Open Source AI Definition does not accurately ensure that AI systems preserve the unrestricted rights of users to run, copy, distribute, study, change, and improve them." ... Afterwards, in a more sorrowful than angry statement, Callaway wrote: "I am deeply disappointed in the OSI's decision to choose a flawed definition. I had hoped they would be capable of being aspirational. Instead, we get the same excuses and the same compromises wrapped in a facade of an open process." Chris Short, an AWS senior developer advocate, Open Source Strategy & Marketing, agreed. He responded to Callaway that he: "100 percent believe in my soul that adopting this definition is not in the best interests of not only OSI but open source at large will get completely diluted."


What North Korea’s infiltration into American IT says about hiring

Agents working for the North Korean government use stolen identities of US citizens, create convincing resumes with generative AI (genAI) tools, and make AI-generated photos for their online profiles. Using VPNs and proxy servers to mask their actual locations — and maintaining laptop farms run by US-based intermediaries to create the illusion of domestic IP addresses — the perpetrators use either Western-based employees for online video interviews or, less successfully, real-time deepfake videoconferencing tools. And they even offer up mailing addresses for receiving paychecks. ... Among her assigned tasks, Chapman maintained a PC farm of computers used to simulate a US location for all the “workers.” She also helped launder money paid as salaries. The group even tried to get contractor positions at US Immigration and Customs Enforcement and the Federal Protective Services. (They failed because of those agencies’ fingerprinting requirements.) They did manage to land a job at the General Services Administration, but the “employee” was fired after the first meeting. A Clearwater, FL IT security company called KnowBe4 hired a man named “Kyle” in July. But it turns out that the picture he posted on his LinkedIn account was a stock photo altered with AI. 


Contesting AI Safety

The dangers posed by these machines arise from the idea that they “transcend some of the limitations of their designers.” Even if rampant automation and unpredictable machine behavior may destroy us, the same technology promises unimaginable benefits in the far future. Ahmed et al. describe this epistemic culture of AI safety that drives much of today’s research and policymaking, focused primarily on the technical problem of aligning AI. This culture traces back to the cybernetics and transhumanist movements. In this community, AI safety is understood in terms of existential risks—unlikely but highly impactful events, such as human extinction. The inherent conflict between a promised utopia and cataclysmic ruin characterizes this predominant vision for AI safety. Both the AI Bill of Rights and SB 1047 assert claims about what constitutes a safe AI model but fundamentally disagree on the definition of safety. A model deemed safe under SB 1047 might not satisfy the Safe and Effective principle of the White House AI Blueprint; a model that follows the AI Blueprint could cause critical harm. What does it truly mean for AI to be safe? 


Why Companies Should Embrace Ethical Hackers

Security researchers (or hackers, take your pick) are generally good people motivated by curiosity, not malicious intent. Making guesses, taking chances, learning new things, and trying and failing and trying again is fun. The love of the game and ethical principles are two separate things, but many researchers have both in spades. Unfortunately, the government has historically sided with corporations. Scared by the Matthew Broderick movie WarGames plot, Ronald Reagan initiated legislation that resulted in the Computer Fraud and Abuse Act of 1986 (CFAA). Good-faith researchers have been haunted ever since. Then there is The Digital Millennium Copyright Act (DMCA) of 1998, which made it explicitly illegal to “circumvent a technological measure that effectively controls access to a work protected under [copyright law],” something necessary to study many products. A narrow harbor for those engaging in encryption research was carved out in the DMCA, but otherwise, the law put researchers further in danger of legal action against them. All this naturally had a chilling effect as researchers grew tired of being abused for doing the right thing. Many researchers stopped bothering with private disclosures to companies with vulnerable products and took their findings straight to the public. 


Why AI Isn't Just Hype - But A Pragmatic Approach Is Required

It is far better to take a pragmatic view where you open yourself up to the possibilities but proceed with both caution and some help. That must start with working through the buzzwords and trying to understand what people mean, at least at a top level, by an LLM or a vector search or maybe even a Naive Bayes algorithm. But then, it is also important to bring in a trusted partner to help you move to the next stage to build an amazing new digital product, or to undergo a digital transformation with an existing digital product. Whether you’re in start-up mode, you are already a scale-up with a new idea, or you’re a corporate innovator looking to diversify with a new product – whatever the case, you don’t want to waste time learning on the job, and instead want to work with a small, focused team who can deliver exceptional results at the speed of modern digital business. ... Whatever happens or doesn’t happen to GenAI, as an enterprise CIO you are still going to want to be looking for tech that can learn and adapt from circumstance and so help you do the same. At the end of the day, hype cycle or not, AI is really the one tool in the toolbox that can continuously work with you to analyse data in the wild and in non-trivial amounts.



Quote for the day:

"Your attitude is either the lock on or key to your door of success." -- Denis Waitley

Daily Tech Digest - September 14, 2024

Three Critical Factors for a Successful Digital Transformation Strategy

Just as important as the front-end experience are the back-end operations that keep and build the customer relationship. Value-added digital services that deliver back-end operational excellence can improve the customer experience through better customer service, improved security and more. Emerging tech like artificial intelligence can substantially improve how companies get a clearer view into their operations and customer base. Take data flow and management, for example. Many executives report they are swimming in information, yet around half admit they struggle to analyze it, according to research by PayNearMe. While data is important, the insights derived from that data are key to the conclusions executives must draw. Maintaining a digital record of customer information, transaction history, spend behaviors and other metrics and applying AI to analyze and inform decisions can help companies provide better service and protect their end users. They can streamline customer service, for instance, by immediately sourcing relevant information and delivering a resolution in near-real time, or by automating the analysis of spend behavior and location data to shut down potential fraudsters.


AI reshaping the management of remote workforce

In a remote work setting, one of the biggest challenges for organizations remains streamlining operations. For a scattered team, AI emerges as a revolutionary tool for automating shift scheduling and rostering using historical pattern analytics. Historical data on staff availability, productivity, and work patterns enables organizations to optimise schedules and strike a balance between operational needs and employee preferences. In turn, this reduces conflicts and enhances overall work efficiency. Apart from this, AI analyses staff work durations and shifts, further enabling organizations to predict staffing needs and optimise resource allocation. This enhances capacity modelling to ensure the right team member is available to handle tasks during peak times, preventing overstaffing or understaffing issues. ... With expanding use cases, AI-powered facial recognition technology has become a critical part of identity verification and security in remote work settings. Organisations need to ensure security and confidentiality at all stages of their work. In tandem, AI-powered facial recognition ensures that only authorized personnel have access to the company’s sensitive systems and data.
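As a toy illustration of the staffing-prediction point above, the sketch below forecasts next week's demand per weekday from a simple average of past weeks. Real rostering systems use far richer models; the data and function here are invented.

```python
# Toy sketch: predict staffing demand per weekday from historical averages.
# Real rostering systems use richer models; the numbers here are invented.
from collections import defaultdict


def forecast_staffing(history):
    """history: list of (weekday, staff_needed) tuples from past weeks."""
    per_day = defaultdict(list)
    for weekday, needed in history:
        per_day[weekday].append(needed)
    # Average past demand per weekday, rounded up so peaks are not understaffed.
    return {day: -(-sum(vals) // len(vals)) for day, vals in per_day.items()}


history = [("Mon", 12), ("Tue", 9), ("Mon", 14), ("Tue", 10), ("Mon", 13)]
print(forecast_staffing(history))   # e.g. {'Mon': 13, 'Tue': 10}
```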


The DPDP act: Navigating digital compliance under India’s new regulatory landscape

Adapting to the DPDPA will require tailored approaches, as different sectors face unique challenges based on their data handling practices, customer bases, and geographical scope. However, some fundamental strategies can help businesses effectively navigate this new regulatory landscape. First, conducting a comprehensive data audit is essential. Businesses need to understand what data they collect, where it is stored, and who has access to it. Mapping out data flows allows organizations to identify risks and address them proactively, laying the groundwork for robust compliance. Appointing a Data Protection Officer (DPO) is another critical step. The DPO will be responsible for overseeing compliance efforts, serving as the primary point of contact for regulatory bodies, and handling data subject requests. While it is not yet established whether appointing a DPO is mandatory, the role is clearly vital for embedding a culture of data privacy within the organisation. Technology can also play a significant role in ensuring compliance. Tools such as Unified Endpoint Management (UEM) solutions, encryption technologies, and data loss prevention (DLP) systems can help businesses monitor data flows, detect anomalies, and prevent unauthorized access.


10 Things To Avoid in Domain-Driven Design (DDD)

To prevent potential issues, maintain a domain model that is uncomplicated and accurately reflects the domain. Focus your modeling effort on the components of the domain that offer strategic value, and streamline or exclude less critical elements. Remember, Domain-Driven Design (DDD) is primarily concerned with strategic design, not with needlessly complicating the domain model with unnecessary intricacies. ... It's crucial to leverage DDD to deeply analyze and concentrate on the domain's most vital and influential parts. Identify the aspects that deliver the highest value to the business and ensure that your modeling efforts are closely aligned with the business's overarching priorities and strategic objectives. Actively collaborating with key business stakeholders is essential to understand what holds the greatest value to them and to prioritize those areas in your modeling. This keeps the model focused on the business's critical needs and contributes to the realization of its strategic goals.
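To make the idea of a lean, strategically focused model concrete, here is a minimal sketch of a DDD-style value object and entity in Python. The domain (orders and money) and all names are invented; the point is that the model captures only the concepts the business actually cares about.

```python
# Minimal sketch of a lean DDD-style domain model: one value object and one
# entity, capturing only what the business cares about. Names are invented.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Money:
    """Value object: defined entirely by its attributes, immutable."""
    amount_cents: int
    currency: str

    def add(self, other: "Money") -> "Money":
        assert self.currency == other.currency, "cannot mix currencies"
        return Money(self.amount_cents + other.amount_cents, self.currency)


@dataclass
class Order:
    """Entity: has an identity that persists as its state changes."""
    order_id: str
    lines: list = field(default_factory=list)

    def add_line(self, price: Money) -> None:
        self.lines.append(price)

    def total(self) -> Money:
        total = Money(0, self.lines[0].currency if self.lines else "USD")
        for line in self.lines:
            total = total.add(line)
        return total


order = Order("ORD-1")
order.add_line(Money(1500, "USD"))
order.add_line(Money(250, "USD"))
print(order.total())   # Money(amount_cents=1750, currency='USD')
```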


How to Build a Data Governance Program in 90 Days

With a new data-friendly CIO at the helm, Hidalgo was able to assemble the right team for the job and, at the same time, create an environment of maximum engagement with data culture. She assembled discussion teams and even a data book club that read and reviewed the latest data governance literature. In turn, that team assembled its own data governance website as a platform not just for sharing ideas but also to spread the momentum. “We kept the juices flowing, kept the excitement,” Hidalgo recalled. “And then with our data governance office and steering committee, we engaged with all departments, we have people from HR, compliance, legal product, everywhere – to make sure that everyone is represented.” ... After choosing a technology platform in May, Hidalgo began the most arduous part of the process: preparation for a “jumpstart” campaign that would kick off in July. Hidalgo and her team began to catalog existing data one subset of data at a time – 20 KPIs or so – and complete its business glossary terms. Most importantly, Hidalgo had all along been building bridges between Shaw’s IT team, data governance crew, and business leadership to the degree that when the jumpstart was completed – on time – the entire business saw the immense value-add of the data governance that had been built.


Varied Cognitive Training Boosts Learning and Memory

The researchers observed that varied practice, not repetition, primed older adults to learn a new working memory task. Their findings, which appear in the journal Intelligence, propose diverse cognitive training as a promising whetstone for maintaining mental sharpness as we age. “People often think that the best way to get better at something is to simply practice it over and over again, but robust skill learning is actually supported by variation in practice,” said lead investigator Elizabeth A. L. Stine-Morrow ... The researchers narrowed their focus to working memory, or the cognitive ability to hold one thing in mind while doing something else. “We chose working memory because it is a core ability needed to engage with reality and construct knowledge,” Stine-Morrow said. “It underpins language comprehension, reasoning, problem-solving and many sorts of everyday cognition.” Because working memory often declines with aging, Stine-Morrow and her colleagues recruited 90 Champaign-Urbana locals aged 60-87. At the beginning and end of the study, researchers assessed the participants’ working memory by measuring each person’s reading span: their capacity to remember information while reading something unrelated.


Why Cloud Migrations Fail

One stumbling block on the cloud journey is misunderstanding or confusion around the shared responsibility model. This framework delineates the security obligations of cloud service providers, or CSPs, and customers. The model necessitates a clear understanding of end-user obligations and highlights the need for collaboration and diligence. Broad assumptions about the level of security oversight provided by the CSP can lead to security/data breaches that the U.S. National Security Agency (NSA) notes “likely occur more frequently than reported.” It’s also worth noting that 82% of breaches in 2023 involved cloud data. The confusion is often magnified in cases of a cloud “lift-and-shift,” a method where business-as-usual operations, architectures and practices are simply pushed into the cloud without adaptation to their new environment. In these cases, organizations may be slow to implement proper procedures, monitoring and personnel to match the security limitations of their new cloud environment. While the level of embedded security can differ depending on the selected cloud model, the customer must often enact strict security and identity and access management (IAM) controls to secure their environment.


AI - peril or promise?

The interplay between AI data centers and resource usage necessitates innovative approaches to mitigate environmental impacts. Advances in cooling technology, such as liquid immersion cooling and the use of recycled water, offer potential solutions. Furthermore, utilizing recycled or non-potable water for cooling can alleviate the pressure on freshwater resources. Moreover, AI itself can be leveraged to enhance the efficiency of data centers. AI algorithms can optimize energy use by predicting cooling needs, managing workloads more efficiently, and reducing idle times for servers. Predictive maintenance powered by AI can also prevent equipment failures, thereby reducing the need for excessive cooling. This is good news: as the sector continues to use AI, it stands to benefit from greater efficiency, cost savings, and improved services, and the expected impact of AI on the operational side of data centres is very positive. Over 65 percent of survey respondents reported that their organizations are regularly using generative AI, nearly double the percentage from the 2023 survey, and around 90 percent of respondents expect their data centers to be more efficient as a direct result of AI applications.


HP Chief Architect Recalibrates Expectations Of Practical Quantum Computing’s Arrival From Generations To Within A Decade

Hewlett Packard Labs is now adopting a holistic co-design approach, partnering with other organizations developing various qubits and quantum software. The aim is to simulate quantum systems to solve real-world problems in solid-state physics, exotic condensed matter physics, quantum chemistry, and industrial applications. “What is it like to actually deliver the optimization we’ve been promised with quantum for quite some time, and achieve that on an industrial scale?” Bresniker posed. “That’s really what we’ve been devoting ourselves to—beginning to answer those questions of where and when quantum can make a real impact.” One of the initial challenges the team tackled was modeling benzyne, an exotic chemical derived from the benzene ring. “When we initially tackled this problem with our co-design partners, the solution required 100 million qubits for 5,000 years—that’s a lot of time and qubits,” Bresniker told Frontier Enterprise. Considering current quantum capabilities are in the tens or hundreds of qubits, this was an impractical solution. By employing error correction codes and simulation methodologies, the team significantly reduced the computational requirements.


New AI reporting regulations

At its core, the new proposal requires developers and cloud service providers to fulfill reporting requirements aimed at ensuring the safety and cybersecurity resilience of AI technologies. This necessitates the disclosure of detailed information about AI models and the platforms on which they operate. One of the proposal’s key components is cybersecurity. Enterprises must now demonstrate robust security protocols and engage in what’s known as “red-teaming”—simulated attacks designed to identify and address vulnerabilities. This practice is rooted in longstanding cybersecurity practices, but it does introduce new layers of complexity and cost for cloud users. Based on the negative impact of red-teaming on enterprises, I suspect it may be challenged in the courts. The regulation does increase focus on security testing and compliance. The objective is to ensure that AI systems can withstand cyberthreats and protect data. However, this is not cheap. Achieving this result requires investments in advanced security tools and expertise, typically stretching budgets and resources. My “back of the napkin” calculations figure about 10% of the system’s total cost.



Quote for the day:

"Your greatest area of leadership often comes out of your greatest area of pain and weakness." -- Wayde Goodall

Daily Tech Digest - September 13, 2024

AI can change belief in conspiracy theories, study finds

“Our findings fundamentally challenge the view that evidence and arguments are of little use once someone has ‘gone down the rabbit hole’ and come to believe a conspiracy theory,” the team wrote. Crucially, the researchers said, the approach relies on an AI system that can draw on a vast array of information to produce conversations that encourage critical thinking and provide bespoke, fact-based counterarguments. ... “About one in four people who began the experiment believing a conspiracy theory came out the other end without that belief,” said Costello. “In most cases, the AI can only chip away – making people a bit more sceptical and uncertain – but a select few were disabused of their conspiracy entirely.” The researchers added that reducing belief in one conspiracy theory appeared to reduce participants’ belief in other such ideas, at least to a small degree, while the approach could have applications in the real world – for example, AI could reply to posts relating to conspiracy theories on social media. Prof Sander van der Linden of the University of Cambridge, who was not involved in the work, questioned whether people would engage with such AI voluntarily in the real world.


Does Value Stream Management Really Work?

“Value stream management is indeed working when it is approached holistically by integrating the framework with technology and people. By mapping and optimizing every step in the customer journey, companies can eliminate waste, create efficiency and ultimately deliver sought-after value to customers,” says Saraha Burnett, chief operations officer at full-service digital experience and engineering firm TMG, in an email interview. “The key lies in continuous improvement and stakeholder engagement throughout the value stream, ensuring alignment and commitment to delivering responsiveness and quality to customer needs.”


Digital ID hackathons to explore real-world use cases

The hackathons aim to address the cold start problem by involving verifiers to facilitate the widespread adoption of mDLs. In this context, the cold start problem refers to a marketplace that depends on both identity holders and verifiers participating. The primary focus of the hackathon will be on building minimum viable products (MVPs) that showcase the functionality of the solution. These MVPs will enable participants to test real-world use cases for mDLs. The digital version of California driver’s licenses has a variety of potential uses, according to the OpenID Foundation, including facilitating TSA checks at airport security checkpoints, verifying age for purchasing age-restricted items, accessing DMV websites online, and serving peer-to-peer identification purposes. For the hackathon, the California DMV will issue mDLs in two formats: the ISO 18013-5 standard and the W3C Verifiable Credentials v1.1 specification. The dual issuance gives verifiers the flexibility to choose the verification method that best aligns with their system requirements, the foundation says. Christopher Goh, the national harmonization lead for digital identity at Austroads, has written a one-pager discussing the various standards within the ISO/IEC 18013-5 framework specifically related to mDLs.
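For context, a W3C Verifiable Credential v1.1 is a JSON-LD document; the minimal Python sketch below shows its general shape for a hypothetical mDL-style credential. The field values, issuer identifier, and proof are placeholders for illustration, not the California DMV's actual issuance format.

```python
# Minimal sketch of the general shape of a W3C Verifiable Credential (v1.1).
# Values, issuer, and proof are placeholders, not the DMV's real format.
import json

credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "DriverLicenseCredential"],  # second type is illustrative
    "issuer": "did:example:dmv",
    "issuanceDate": "2024-09-13T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder-123",
        "givenName": "Alex",
        "familyName": "Doe",
        "birthDate": "1990-01-01",
    },
    # In a real credential this proof is generated by the issuer's signing key.
    "proof": {
        "type": "DataIntegrityProof",
        "proofValue": "<signature placeholder>",
    },
}

print(json.dumps(credential, indent=2))
```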


Microsoft VS Code Undermined in Asian Spy Attack

"While the abuse of VSCode is concerning, in our opinion, it is not a vulnerability," Assaf Dahan, director of threat research for Unit 42, clarifies. Instead, he says, "It's a legitimate feature that was abused by threat actors, as often happens with many legitimate software." And there are a number of ways organizations can protect against a bring-your-own-VSCode attack. Besides hunting for indicators of compromise (IoCs), he says, "It's also important to consider whether the organization would want to limit or block the use of VSCode on endpoints of employees that are not developers or do not require the use of this specific app. That can reduce the attack surface." "Lastly, consider limiting access to the VSCode tunnel domains '.tunnels.api.visualstudio[.]com' or '.devtunnels[.]ms' to users with a valid business requirement. Notice that these domains are legitimate and are not malicious, but limiting access to them will prevent the feature from working properly and consequently make it less attractive for threat actors," he adds.


Rather Than Managing Your Time, Consider Managing Your Energy

“Achievement is no longer enough to be successful,” Sunderland says. “People also want to feel happy at the same time. Before, people were concerned only with thinking (mental energy) and doing (physical energy). But that success formula no longer works. Today, it’s essential to add feelings (emotional energy) and inner self-experience (spiritual energy) into the mix for people to learn how to be able to connect to and manage their energy.” ... Sunderland says all forms of human energy exist in relation to one another. “When these energies are in sync with each other, people’s energy will be in flow. People who maintain good health will be able to track those feelings (emotional energy) that flow through their bodies (physical energy), which is an essential skill to help increase energy awareness. With greater levels of energy awareness, people can grow their self-acceptance (emotional energy), which enhances their self-confidence.” He says that as confidence builds, people experience greater clarity of thought (mental energy) and they are able to increase their ability to speak truth (spiritual energy), amplifying their creative energy. 


Mastercard Enhances Real-Time Threat Visibility With Recorded Future Purchase

The payments network has made billions of dollars worth of acquisitions through the years. Within the security solutions segment of Mastercard, key focal points center on examining and protecting digital identities, protecting transactions and using insights from 143 billion annual payments to fashion real-time intelligence that can be used by merchants and FIs to anticipate new threats. By way of example, the firm acquired Ekata in 2021 to score transactions for the likelihood of fraud through robust identity verification. All told, Mastercard has invested more than $7 billion over the past five years in its efforts to protect the digital economy. Artificial intelligence (AI) is a key ingredient here, and Gerber detailed to PYMNTS that the company has been a pioneer in harnessing generative AI to extract trends from huge swaths of data to create “identity graphs” that provide immediate value to any merchant or FI that wants to understand more about the individuals interacting with them in the digital realm. The use of other “intelligence graphs” connects the dots across data points to turn threat-related data into actionable insights.


2 Open Source AI Tools That Reduce DevOps Friction

DevOps has been built upon taking everything infrastructure and transitioning it to code, aka Infrastructure as Code (IaC). This includes deployment pipelines, monitoring, repositories — anything that is built upon configurations can be represented in code. This is where AI tools like ChatGPT and AIaC come into play. AIaC, an open source command-line interface (CLI) tool, enables developers to generate IaC templates, shell scripts and more, directly from the terminal using natural language prompts. This eliminates the need to manually write and review code, making the process faster and less error-prone. ... The use of AI in DevOps is still in its early stages, but it’s quickly gaining momentum with the introduction of new open source and commercial services. The rapid pace of innovation suggests that AI will soon be embedded in most DevOps tools. From automated code generation with AIaC to advanced diagnostics with K8sGPT, the possibilities seem endless. Firefly is not just observing this revolution — it’s actively contributing to it. By integrating AI into DevOps workflows, teams can work smarter, not harder. 


How to make Infrastructure as Code secure by default

Scanning IaC templates before deployment is undeniably important; it’s an effective way to identify potential security issues early in the development process. It can help prevent security breaches and ensure that your cloud infrastructure aligns with security best practices. If you have IaC scanning tools integrated into your CI/CD pipelines, you can also run automated scans with each code commit or pull request, catching errors early. Post-deployment scans are important because they assess the infrastructure in its operational environment, which may result in finding issues that weren’t identified in dev and test environments. These scans may also identify unexpected dependencies or conflicts between resources. Any manual fixes you make to address these problems will also require you to update your existing IaC templates, otherwise any apps using those templates will be deployed with the same issues baked in. And while identifying these issues in production environments is important to overall security, it can also increase your costs and require your team to apply manual fixes to both the application and the IaC.
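As a rough sketch of the pre-deployment scanning step, the Python snippet below inspects a Terraform plan exported as JSON (for example via `terraform show -json plan.out > plan.json`) and flags security groups open to the world. Dedicated IaC scanners cover far more rules; the attribute paths shown assume the AWS provider's plan layout.

```python
# Rough sketch: flag security groups open to 0.0.0.0/0 in a Terraform plan
# exported as JSON. Dedicated IaC scanners cover far more rules; attribute
# paths assume the AWS provider's plan layout.
import json


def find_open_security_groups(plan_path):
    plan = json.load(open(plan_path))
    findings = []
    for change in plan.get("resource_changes", []):
        if change.get("type") != "aws_security_group":
            continue
        after = (change.get("change") or {}).get("after") or {}
        for rule in after.get("ingress") or []:
            if "0.0.0.0/0" in (rule.get("cidr_blocks") or []):
                findings.append(f"{change['address']}: ingress open to 0.0.0.0/0")
    return findings


if __name__ == "__main__":
    for finding in find_open_security_groups("plan.json"):
        print(finding)
```

Run as a CI step on every commit or pull request, a check like this catches the misconfiguration before deployment; the same logic can be re-run post-deployment against the live environment's exported state.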


New brain-on-a-chip platform to deliver 460x efficiency boost for AI tasks

Despite its novel approach, IISc’s platform is designed to work alongside existing AI hardware, rather than replace it. Neuromorphic accelerators like the one developed by IISc are particularly well-suited for offloading tasks that involve repetitive matrix multiplication — a common operation in AI. “GPUs and TPUs, which are digital, are great for certain tasks, but our platform can take over when it comes to matrix multiplication. This allows for a major speed boost,” explained Goswami. ... As the demand for more advanced AI models increases, existing digital systems are nearing their energy and performance limits. Silicon-based processors, which have driven AI advancements for years, are starting to show diminishing returns in terms of speed and efficiency. “With silicon electronics reaching saturation, designing brain-inspired accelerators that can work alongside silicon chips to deliver faster, more efficient AI is becoming crucial,” Goswami noted. By working with molecular films and analog computing, IISc is offering a new path forward for AI hardware, one that could dramatically cut energy consumption while boosting computational power.


Android Trojans Still Pose a Threat, Researchers Warn

Affected users appear to have been tricked into installing the malware, which doesn't appear to be getting distributed via official Google channels. "Based on our current detections, no apps containing this malware are found on Google Play," a Google spokesperson told Information Security Media Group. "Android users are automatically protected against known versions of this malware by Google Play Protect, which is on by default on Android devices with Google Play Services," the spokesperson said. "Google Play Protect can warn users or block apps known to exhibit malicious behavior, even when those apps come from sources outside of Play." Researchers said they first spotted the malware when it was uploaded to analysis site VirusTotal in May from Uzbekistan, in the form of a malicious app made to appear as if it was developed by a "local tax authority." By tracing the IP address to which the malware attempted to "phone home," the researchers found other .apk - Android package - files that showed similar behavior, which they traced to attacks that began by November 2023.



Quote for the day:

"Sometimes it takes a good fall to really know where you stand." -- Hayley Williams

Daily Tech Digest - September 12, 2024

Navigating the digital economy: Innovation, risk, and opportunity

As we move towards the era of Industry 5.0, the digital economy needs to adopt a human-centred design (HCD) approach, in which the technology layers revolve around humans at the core. By 2030, Organoid Intelligence (OI) is envisaged to dominate the digital economy space, with potential across multiple disciplines and super-intelligent capabilities. Such a capability could democratize digital economy services across sectors in a seamless manner. This rapid technology adoption exposes the system to cyber risks, which calls for advanced security solutions such as quantum security embedded with digital currencies like the e-Rupee and cryptocurrencies. The ‘e-rupee’, a virtual equivalent of cash stored in a digital wallet, offers anonymity in payments. ... Indian banks are already piloting blockchain for issuing Letters of Credit, and integrating UPI with blockchain could combine the strengths of both systems, ensuring greater security, ease of use, and instant transactions. Such cyber security threats also create an opportunity for Bitcoin and other cryptocurrencies to expand from their current offerings towards sectors such as gaming.


From DevOps to Platform Engineering: Powering Business Success

Platform engineering provides a solution with the tools and frameworks needed to scale software delivery processes, ensuring that organizations can handle increasing workloads without sacrificing quality or speed. It also leads to improved consistency and reliability. By standardizing workflows and automating processes, platform engineering reduces the variability and risk associated with manual interventions. This leads to more consistent and reliable deployments, enhancing the overall stability of applications in production. Further productivity comes from the efficiency it offers developers themselves. Developers are most productive when they can focus on writing code and solving business problems. Platform engineering removes the friction associated with provisioning resources, managing environments, and handling operational tasks, allowing developers to concentrate on what they do best. It also provides the infrastructure and tools needed to experiment, iterate, and deploy new features rapidly, enabling organizations to stay ahead of the curve.


Scaling Databases To Meet Enterprise GenAI Demands

A hybrid approach combines vertical and horizontal scalability, providing flexibility and maximizing resource utilization. Organizations can begin with vertical scaling to enhance the performance of individual nodes and then transition to horizontal scaling as data volumes and processing demands increase. This strategy allows businesses to leverage their existing infrastructure while preparing for future growth — for example, initially upgrading servers to improve performance and then distributing the database across multiple nodes as the application scales. ... Data partitioning and sharding involve dividing large datasets into smaller, more manageable pieces distributed across multiple servers. This approach is particularly beneficial for vector databases, where partitioning data improves query performance and reduces the load on individual nodes. Sharding allows a vector database to handle large-scale data more efficiently by distributing the data across different nodes based on a predefined shard key. This ensures that each node only processes a subset of the data, optimizing performance and scalability.
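To illustrate the shard-key idea in the simplest possible terms, the sketch below routes records to shards by hashing a predefined shard key. Production databases use more sophisticated schemes such as consistent hashing or range sharding; the four-shard setup here is invented for illustration.

```python
# Simplest-possible sketch of hash-based sharding on a predefined shard key.
# Production systems use consistent hashing or range sharding; the four-shard
# setup here is invented for illustration.
import hashlib

NUM_SHARDS = 4


def shard_for(shard_key: str) -> int:
    """Map a shard key (e.g. a customer ID) to one of NUM_SHARDS nodes."""
    digest = hashlib.sha256(shard_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS


# Each node only ever processes its own subset of the data.
for customer_id in ["cust-001", "cust-002", "cust-003", "cust-004"]:
    print(customer_id, "->", f"shard-{shard_for(customer_id)}")
```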


Safeguarding Expanding Networks: The Role of NDR in Cybersecurity

NDR plays a crucial role in risk management by continuously monitoring the network for any unusual activities or anomalies. This real-time detection allows security teams to catch potential breaches early, often before they can cause serious damage. By tracking lateral movements within the network, NDR helps to contain threats, preventing them from spreading. Plus, it offers deep insights into how an attack occurred, making it easier to respond effectively and reduce the impact. ... When it comes to NDR, key stakeholders who benefit from its implementation include Security Operations Centre (SOC) teams, IT security leaders, and executives responsible for risk management. SOC teams gain comprehensive visibility into network traffic, which reduces false positives and allows them to focus on real threats, ultimately lowering stress and improving their efficiency. IT security leaders benefit from a more robust defence mechanism that ensures complete network coverage, especially in hybrid environments where both managed and unmanaged devices need protection.


Application detection and response is the gap-bridging technology we need

In the shared-responsibility model, not only is there the underlying cloud service provider (CSP) to consider, but there are external SaaS integrations and internal development and platform teams, as well as autonomous teams across the organization often leading to opaque systems with a lack of clarity around where responsibilities begin and end. On top of that, there are considerations around third-party dependencies, components, and vulnerabilities to address. Taking that further, the modern distributed nature of systems creates more opportunities for exploitation and abuse. One example is modern authentication and identity providers, each of which is a potential attack vector over which you have limited visibility due to not owning the underlying infrastructure and logging. Finally, there’s the reality that we’re dealing with an ever-increasing velocity of change. As the industry continues further adoption of DevOps and automation, software delivery cycles continue to accelerate. That trend is only likely to increase with the use of genAI-driven copilots. 


Data Is King. It Is Also Often Unlicensed or Faulty

A report published in the Nature Machine Intelligence journal presents a large-scale audit of dataset licensing and attribution in AI, analyzing over 1,800 datasets used in training AI models on platforms such as Hugging Face. The study revealed widespread miscategorization, with over 70% of datasets omitting licensing information and over 50% containing errors. In 66% of the cases, the licensing category was more permissive than intended by the authors. The report cautions against a "crisis in misattribution and informed use of popular datasets" that is driving recent AI breakthroughs but also raising serious legal risks. "Data that includes private information should be used with care because it is possible that this information will be reproduced in a model output," said Robert Mahari, co-author of the report and JD-PhD at MIT and Harvard Law School. In the vast ocean of data, licensing defines the legal boundaries of how data can be used. ... "The rise in restrictive data licensing has already caused legal battles and will continue to plague AI development with uncertainty," said Shayne Longpre, co-author of the report and research Ph.D. candidate at MIT. 


AI interest is driving mainframe modernization projects

AI and generative AI promise to transform the mainframe environment by delivering insights into complex unstructured data, augmenting human action with advances in speed, efficiency and error reduction, while helping to understand and modernize existing applications. Generative AI also has the potential to illuminate the inner workings of monolithic applications, Kyndryl stated. “Enterprises clearly see the potential with 86% of respondents [confirming] they are deploying, or planning to deploy, generative AI tools and applications in their mainframe environments, while 71% say that they are already implementing generative AI-driven insights as part of their mainframe modernization strategy,” Kyndryl stated. ... While AI will likely shape the future for mainframes, a familiar subject remains a key driver for mainframe investments: security. “Given the ongoing threat from cyberattacks, increasing regulatory pressures, and an uptick in exposure to IT risk, security remains a key focus for respondents this year with almost half (49%) of the survey respondents [citing] security as the number one driver of their mainframe modernization investments in the year ahead,” Kyndryl stated.


How AI Is Propelling Data Visualization Techniques

AI has improved data processing and cleaning. AI identifies missing data and inconsistencies, which means we end up with more reliable datasets for effective visualization. Personalization is yet another benefit AI has brought. AI-powered tools can tailor visualizations based on set goals, context, and preferences. For example, a user can provide their business requirements, and AI will provide a customized chart and information layout based on these requirements. This saves time and can also be helpful when creativity isn’t flowing as well as we’d like. ... Augmented reality (AR) is useful for geographic data visualization in particular. While traditional maps provide a top-down perspective, AR mapping systems use existing mapping technologies, such as GPS, satellite images, and 3D models, and combine them with real-time data. For example, Google’s Lens in Maps feature uses AI and AR to help users navigate their surroundings by lifting their phones and getting instant feedback about the nearest points of interest. Business users will appreciate how AI automates insights with natural language generation (NLG). 
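
As a toy illustration of the NLG idea (real tools use language models; the figures and template here are made up), turning a couple of metrics into a readable insight can be as simple as:

```python
# Hypothetical quarterly revenue figures, in $M.
sales = {"Q1": 1.8, "Q2": 2.4}

change = (sales["Q2"] - sales["Q1"]) / sales["Q1"] * 100
direction = "rose" if change >= 0 else "fell"
print(f"Revenue {direction} {abs(change):.0f}% quarter over quarter, "
      f"from ${sales['Q1']}M to ${sales['Q2']}M.")
# -> Revenue rose 33% quarter over quarter, from $1.8M to $2.4M.
```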


Framing The Role Of The Board Around Cybersecurity Is No Longer About Risk

Having set an unequivocal level of accountability with one executive for cybersecurity, the Board may want to revisit the history of the firm with regard to cyber protection, to ensure that mistakes are not repeated, that funding is sufficient and, overall, that the right timeframes are set and respected, in particular over the mid- to long-term horizon if large-scale transformative efforts are required around cybersecurity. We start to see a list of topics emerging, broadly matching my earlier pieces, around the “key questions the Board should ask”, but more than ever, executive accountability is key in the face of current threats to start building up a meaningful and powerful top-down dialogue around cybersecurity. Readers may notice that I have not used the word “risk” even once in this article. Ultimately, risk is about things that may or may not happen: In the face of the “when-not-if” paradigm around cyber threats – and increasingly other threats as well – it is essential for the Board to frame and own business protection as a topic rooted in the reality of the world we live in, not some hypothetical matter which could be somehow mitigated, transferred or accepted.


Embracing First-Party Data in a Cookie-Alternative World

Unfortunately, the transition away from third-party cookies presents significant challenges that extend beyond shifting customer interactions. Many businesses are particularly concerned about the implications for data security and privacy. When looking into alternative data sources, businesses may inadvertently expose themselves to increased security risks. The shift to first-party data collection methods requires careful evaluation and implementation of advanced security measures to protect against data breaches and fraud. It is also crucial to ensure the transition is secure and compliant with evolving data privacy regulations. To ensure the data is secure, businesses should go beyond standard encryption practices and adopt advanced security measures such as tokenization for sensitive data fields, which minimizes the risk of exposing real data in the event of a breach. Additionally, regular security audits are crucial. Organizations should leverage automated tools for continuous security monitoring and compliance checks that can provide real-time alerts on suspicious activities, helping to preempt potential security incidents. 
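
For instance, a minimal sketch of field-level tokenization might look like the following; in production the vault would be a separately secured service, and all names here are hypothetical.

```python
import secrets

# In production the token vault would be a separately secured datastore with its
# own access controls; a dict stands in for it in this sketch.
_token_vault: dict[str, str] = {}

def tokenize(value: str) -> str:
    """Replace a sensitive field with an opaque token; only the vault can reverse it."""
    token = "tok_" + secrets.token_urlsafe(16)
    _token_vault[token] = value
    return token

def detokenize(token: str) -> str:
    return _token_vault[token]

record = {"email": tokenize("jane@example.com"), "plan": "pro"}
# Downstream systems and backups see only "tok_..."; a breach of that data
# exposes tokens rather than the underlying email address.
print(record)
```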



Quote for the day:

“It's not about who wants you. It's about who values and respects you.” -- Unknown

Daily Tech Digest - September 11, 2024

Unlocking the Quantum Internet: Germany’s Latest Experiment Sets Global Benchmarks

“Comparative analysis with existing QKD systems involving SPS [single-photon sources] reveals that the SKR [secret key rate] achieved in this work goes beyond all current SPS-based implementations. Even without further optimization of the source and setup performance, it approaches the levels attained by established decoy state QKD protocols based on weak coherent pulses,” remarked the first author of the work, Dr. Jingzhong Yang. The researchers speculate that quantum dots (QDs) also offer great prospects for the realization of other quantum internet applications, such as quantum repeaters and distributed quantum sensing, as they allow for inherent storage of quantum information and can emit photonic cluster states. The outcome of this work underscores the viability of seamlessly integrating semiconductor single-photon sources into realistic, large-scale, and high-capacity quantum communication networks. The need for secure communication is as old as humanity itself. Quantum communication uses the quantum characteristics of light to ensure that messages cannot be intercepted. “Quantum dot devices emit single photons, which we control and send to Braunschweig for measurement. This process is fundamental to quantum key distribution,” Ding said.
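
For readers unfamiliar with QKD, here is a toy simulation of the basis-sifting step used in BB84-style protocols; it models none of the physics behind the single photons the article describes, only the bookkeeping that turns matched measurement bases into shared key bits.

```python
import secrets

# Toy simulation of the sifting step in a BB84-style QKD protocol.
n = 16
alice_bits  = [secrets.randbelow(2) for _ in range(n)]
alice_bases = [secrets.randbelow(2) for _ in range(n)]  # 0 = rectilinear, 1 = diagonal
bob_bases   = [secrets.randbelow(2) for _ in range(n)]

# Rounds where Bob guessed the wrong basis are discarded; the rest form the raw key.
sifted_key = [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases) if a == b]
print(sifted_key)  # roughly half the rounds survive; eavesdropping shows up as errors here
```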


How AI Impacts Sustainability Opportunities and Risks

While AI can be applied to sustainability challenges, there are also questions around the sustainability of AI itself given technology’s impact on the environment. “We know that many companies are already dealing with the ramifications of increased energy usage and water usage as they're building out their AI models,” says Shim. ... As the AI market goes through its growing pains, chips are likely to become more efficient and use cases for the technology will become more targeted. But predicting the timeline for that potential future or simply waiting for it to happen is not the answer for enterprises that want to manage opportunities and risks around AI and sustainability now. Rather than getting caught up in “paralysis by analysis,” enterprise leaders can take action today that will help to actually build a more sustainable future for AI. With AI having both positive and negative impacts on the environment, enterprise leaders who wield it with targeted purpose are more likely to guide their organizations to sustainable outcomes. Throwing AI at every possible use case and seeing what sticks is more likely to tip the scales toward a net negative environmental impact. 


Agentic AI: A deep dive into the future of automation

Agentic AI combines classical automation with the power of modern large language models (LLMs), using the latter to simulate human decision-making, analysis and creative content. The idea of automated systems that can act is not new, and even a classical thermostat that can turn the heat and AC on and off when it gets too cold or hot is a simple kind of “smart” automation. In the modern era, IT automation has been revolutionized by self-monitoring, self-healing and auto-scaling technologies like Docker, Kubernetes and Terraform, which encapsulate the principles of cybernetic self-regulation, a kind of agentic intelligence. These systems vastly simplify the work of IT operations, allowing an operator to declare (in code) the desired end-state of a system and then automatically align reality with desire—rather than the operator having to perform a long sequence of commands to make changes and check results. However powerful, this kind of classical automation still requires expert engineers to configure and operate the tools using code. Engineers must foresee possible situations and write scripts to capture logic and API calls that would be required. 
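
The declarative, self-regulating pattern described here can be sketched in a few lines of Python; this is a simulation with hypothetical names, not how Kubernetes or Terraform are actually implemented.

```python
import time

desired_replicas = 5              # the operator declares the end state in code
running: set[str] = set()         # stands in for the real system's observed state

def reconcile_once() -> None:
    """One pass of the control loop: compare observed state with the declaration
    and take only the actions needed to close the gap."""
    actual = len(running)
    if actual < desired_replicas:
        for i in range(actual, desired_replicas):
            running.add(f"replica-{i}")           # "start" a missing replica
    elif actual > desired_replicas:
        for name in sorted(running)[desired_replicas:]:
            running.discard(name)                 # "stop" a surplus replica

for _ in range(3):                # a real controller would loop forever
    reconcile_once()
    time.sleep(0.1)

print(sorted(running))            # converges to replica-0 .. replica-4
```

Even in this tiny sketch, an engineer still had to anticipate the situations and encode the reconciliation logic by hand, which is the limitation the article says agentic AI aims to move past.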


How to Make Technical Debt Your Friend

When a team identifies that they are incurring technical debt, they are basing that assessment on their theoretical ideal for the architecture of the system, but that ideal is just their belief based on assumptions that the system will be successful. The MVP may be successful, but in most cases its success is only partial - that is the whole point of releasing MVPs: to learn things that can be understood in no other way. As a result, assumptions about the MVA (minimum viable architecture) that the team needs to build also tend to be at least partially wrong. The team may think that they need to scale to a large number of users or support large volumes of data, but if the MVP is not overwhelmingly appealing to customers, these needs may be a long way off, if they are needed at all. For example, the team may decide to use synchronous communications between components to rapidly deliver an MVP, knowing that an asynchronous model would offer better scalability. However, the switch between synchronous and asynchronous models may never be necessary since scalability may not turn out to be an issue.
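
To illustrate that trade-off with hypothetical components: the synchronous version is the quickest way to ship an MVP, while the asynchronous version buys scalability the product may never need.

```python
import queue

def charge_payment(order: dict) -> str:
    """Hypothetical downstream component."""
    return f"charged {order['id']}"

# Synchronous: the caller waits for the downstream call (simplest thing for an MVP).
def handle_order_sync(order: dict) -> str:
    return charge_payment(order)          # blocks until payment completes

# Asynchronous: the caller enqueues work and returns immediately; a worker drains
# the queue later. This scales better but adds moving parts the MVP may never need.
payment_queue: "queue.Queue[dict]" = queue.Queue()

def handle_order_async(order: dict) -> None:
    payment_queue.put(order)

def payment_worker() -> None:
    while not payment_queue.empty():
        charge_payment(payment_queue.get())

handle_order_sync({"id": "A-1"})
handle_order_async({"id": "A-2"})
payment_worker()
```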


What CIOs should consider before pursuing CEO ambitions

The trend is encouraging, but it’s important to temper expectations. While CIOs have stepped up and delivered digital strategies for business transformation, using those successes as a platform to move into a CEO position could throw a curveball. Jon Grainger, CTO at legal firm DWF, says one key challenge is industrial constraints. “You’ve got to remember that, in a sector like professional services, there are things you’re going to be famous for,” he says. “DWF is famous for providing amazing legal services. And to do that, the bottom line is you’ve got to be a lawyer — and that’s not been my path.” He says CIOs can become CEOs, but only in the right environment. “If the question was rephrased to, ‘Jon, could you see yourself as a CEO?,’ then I would say, ‘Yes, absolutely.’ But I would say I’m unlikely to become the CEO of a legal services company because, ultimately, you’ve got to have the right skill set.” Another challenge is the scale of the transition. Compared to the longevity of other C-suite positions, technology leadership is an executive fledgling. Many CIOs — and their digital leadership peers, such as chief data or digital officers — are focused squarely on asserting their role in the business.


Immediate threats or long-term security? Deciding where to focus is the modern CISO’s dilemma

CISOs need to balance their budgets between immediate threat responses and long-term investments in cybersecurity infrastructure, says Eric O’Neill, national security strategist at NeXasure and a former FBI operative who helped capture former FBI special agent Robert Hanssen, the most notorious spy in US history. While immediate threats require attention, CISOs should allocate part of their budgets to long-term planning measures, such as implementing multi-factor authentication and phased infrastructure upgrades, he says. “This balance often involves hiring incident response partners on retainer to handle breaches, thereby allowing internal teams to focus on prevention and detection,” O’Neill says. “By planning phased rollouts for larger projects, CISOs can spread costs over time while still addressing immediate vulnerabilities.” Clare Mohr, US cyber intelligence lead at Deloitte, says a common approach is for CISOs to allocate 60 to 70% of their budgets to immediate threat response and the remainder to long-term initiatives – although this varies from company to company. “This distribution should be flexible and reviewed annually based on evolving threats,” she says. 


Would you let an AI robot handle 90% of your meetings?

“Let’s assume, fast-forward five or six years, that AI is ready. AI probably can help for maybe 90 per cent of the work,” he said. “You do not need to spend so much time [in meetings]. You do not have to have five or six Zoom calls every day. You can leverage the AI to do that.” Even more interestingly, Yuan alluded to your digital clone potentially being programmed to be better equipped to deal with areas you don’t feel confident in, for example, negotiating a deal during a sales call. “Sometimes I know I’m not good at negotiations. Sometimes I don’t join a sales call with customers,” he explained. “I know my weakness before sending a digital version of myself. I know that weakness. I can modify the parameter a little bit.” ... According to Microsoft’s 2024 Work Trend Index, 75 per cent of knowledge workers use AI at work every day. This is despite 46 per cent of those users having started using it less than six months ago. ... However, leaders are lagging behind when it comes to incorporating AI productivity tools – 59 per cent worry about quantifying the productivity gains of AI and, as a result, 78 per cent of AI users are bringing their own AI tools to work and 52 per cent who use AI at work are reluctant to admit to it for fear it makes them look replaceable.


Understanding the Importance of Data Resilience

Understanding an organization’s current level of data resilience is crucial for identifying areas that need improvement. Key indicators of data resilience include the Recovery Point Objective (RPO), which refers to the maximum acceptable amount of data loss measured in time. A lower RPO signifies a higher level of data resilience, as it minimizes the amount of data at risk during an incident. The Recovery Time Objective (RTO) is the target time for recovering IT and business activities after a disruption. A shorter RTO indicates a more resilient data strategy, as it enables quicker restoration of operations. Data integrity involves maintaining the accuracy and consistency of data over its lifecycle, implementing measures to prevent data corruption, unauthorized access, and accidental deletions. System redundancy, which includes having multiple data centers, failover systems, and cloud-based backups, ensures continuous data availability by providing redundant systems and infrastructure. Building sustainable data resilience requires a long-term commitment to continuous improvement and adaptation. 
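
A small worked example (the targets and timestamps are assumed, purely for illustration) shows how these two indicators are measured after an incident:

```python
from datetime import datetime, timedelta

RPO = timedelta(minutes=15)   # assumed target: lose at most 15 minutes of data
RTO = timedelta(hours=1)      # assumed target: restore operations within 1 hour

last_backup  = datetime(2024, 9, 11, 9, 45)   # hypothetical timeline
incident     = datetime(2024, 9, 11, 10, 5)
service_back = datetime(2024, 9, 11, 10, 50)

data_loss_window = incident - last_backup    # 20 minutes of data at risk
recovery_time    = service_back - incident   # 45 minutes of downtime

print("RPO met:", data_loss_window <= RPO)   # False: backups need to run more often
print("RTO met:", recovery_time <= RTO)      # True
```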


Examining Capabilities-Driven AI

Organizations often respond to trends in technology by developing centralized organizations to adopt the underlying technologies associated with a trend. The industry has decades of experience demonstrating that centralized approaches to adopting technology result in large, centralized cost pools that generate little business value. Since the past is often a good predictor of the future, we expect that many companies will attempt to adopt AI by creating centralized organizations or “centers of excellence,” only to burn millions of dollars without generating significant business value. AI-enablement is much easier to accomplish within a capability than across an entire organization. Organizations can evaluate areas of weakness within a business capability, identify ways to either improve the customer experience and/or reduce the cost to serve, and target improvement levels. Once the improvement is quantified into an economic value, this value can be used to bound the build and operate cost of AI-enhanced capability. Benefit and cost parameters are important because knowledge engineering is often the largest cost associated with an AI-enabled business process. 
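
As a back-of-the-envelope sketch of that bounding exercise (all figures are assumed), the quantified improvement caps what the AI-enabled capability may cost to build and run:

```python
annual_benefit = 1_200_000   # assumed: quantified yearly value of the capability improvement
payback_years  = 2           # assumed: how quickly the business expects the investment back
annual_operate = 250_000     # assumed: yearly run cost, including knowledge engineering upkeep

# The quantified benefit bounds what the AI-enabled capability may cost in total.
max_total_spend = annual_benefit * payback_years
max_build_cost  = max_total_spend - annual_operate * payback_years

print(f"Build budget ceiling under these assumptions: ${max_build_cost:,}")
# -> Build budget ceiling under these assumptions: $1,900,000
```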


SOAR Is Dead, Long Live SOAR

While the core use case for SOAR remains strong, the combination of artificial intelligence, automation, and the current plethora of cybersecurity products will result in a platform that could take market share from SOAR systems, such as an AI-enabled next-generation SIEM, says Eric Parizo, managing principal analyst at Omdia. "SOC decision-makers are [not] going out looking to purchase orchestration and automation as much as they're looking to solve the problem of fostering a faster, more efficient TDIR [threat detection, investigation, and response] life cycle with better, more consistent outcomes," he says. "The orchestration and automation capabilities within standalone SOAR solutions are intended to facilitate those business objectives." AI and machine learning will continue to increasingly augment automation, says Sumo Logic's Clawson. While creating AI security agents that process data and automatically respond to threats is still in its infancy, the industry is clearly moving in that direction, especially as more infrastructure uses an "as-code" approach, such as infrastructure-as-code, he says. The result could be an approach that reduces the need for SOAR.



Quote for the day:

"Kind words do not cost much. Yet they accomplish much." -- Blaise Pascal