Daily Tech Digest - November 13, 2024

In response to current digital transformation demands, organizations are integrating emerging technologies at an unprecedented rate. Despite their numerous benefits, securing these technologies is challenging for technology leaders. The white paper identified more than 200 critical and emerging technologies reshaping the digital ecosystem. Beyond AI and IoT, technologies such as blockchain, biotechnology and quantum computing are rising on the hype cycle, introducing new cybersecurity risks. ... Quantum computing, while promising breakthrough computational power, presents grave cybersecurity risks. It threatens to break current encryption standards, and quantum computers can potentially decrypt data collected now for future access. "The threat of quantum computing underscores the need for quantum-resistant cryptographic solutions to secure our digital future," the white paper stated. ... The cybersecurity industry faces a critical shortage of skilled professionals capable of managing emerging technology security. Cybersecurity Ventures projected a shortfall of 3.5 million cybersecurity professionals by 2025. Gartner predicted this skills gap would cause more than 50% of significant incidents by 2025. 


Do You Need a Solution or Enterprise Architect?

A Solution Architect is more like a surgeon who operates on someone to fix a problem, and the patient returns to normal life in a short time. An Enterprise Architect is more like an internal medicine specialist who treats a patient with a chronic illness over a number of years to improve the person’s quality of life. ... Architects are most successful when they help projects to succeed. Commonality of process and technology can be beneficial for an organization. But once architects are merely policing projects and rejecting aspects based on strict criteria, they lose the ability to positively influence the initiatives. Solution alignment is best achieved through working collaboratively with projects early to convince them of the advantages of various design choices. The first deliverable many architecture teams produce is what I call the “red/yellow/green list”. You’ve all seen these. Each technology classification is listed down the page – for example: server type, operating system, network software, database technology, and programming language. Three “colour” columns follow across the page. “Red” items are forbidden to be used by new projects. Although some legacy applications may still use them, they need to be phased out. “Yellow” items can be used under certain circumstances, but must be pre-approved by some kind of review committee. 


DataRobot launches Enterprise AI Suite to bridge gap between AI development and business value

The agentic AI approach is designed to help organizations handle complex business queries and workflows. The system employs specialist agents that work together to solve multi-faceted business problems. This approach is particularly valuable for organizations dealing with complex data environments and multiple business systems. “You ask a question to your agentic workflow, it breaks up the questions into a set of more specific questions, and then it routes them to agents which are specialists in various different areas,” Saha explained. For instance, a business analyst’s question about revenue might be routed to multiple specialized agents – one handling SQL queries, another using Python – before combining results into a comprehensive response. ... “We have put together a lot of instrumentation which lets people visually understand, for example, if you have a lot of clustering of data in the vector database, you can get a spurious answer,” Saha said. “You would be able to see that, if you see your questions are landing in areas where you don’t have enough information.” This observability extends to the platform’s governance capabilities, with real-time monitoring and intervention features. 
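
To make the routing pattern concrete, here is a minimal sketch of an agentic workflow: a planner splits a broad business question into narrower sub-questions and dispatches each to a specialist agent. All agent names and the hard-coded plan are hypothetical illustrations, not DataRobot's actual implementation.

```python
# Minimal sketch of agentic routing: a planner decomposes a question and
# dispatches sub-questions to specialist agents. Names are hypothetical.

def sql_agent(question: str) -> str:
    # Placeholder: would generate and run a SQL query against a warehouse.
    return f"[SQL agent] rows answering: {question}"

def python_agent(question: str) -> str:
    # Placeholder: would write and execute Python for ad-hoc analysis.
    return f"[Python agent] computed metrics for: {question}"

def plan(question: str) -> list[tuple[str, str]]:
    # A real system would use an LLM to decompose the question; here the
    # split is hard-coded for illustration.
    return [
        ("sql", f"Total revenue by region for: {question}"),
        ("python", f"Quarter-over-quarter growth for: {question}"),
    ]

AGENTS = {"sql": sql_agent, "python": python_agent}

def answer(question: str) -> str:
    partial_results = [AGENTS[kind](sub) for kind, sub in plan(question)]
    # Combine specialist outputs into one response (an LLM would summarize).
    return "\n".join(partial_results)

if __name__ == "__main__":
    print(answer("How did revenue trend last year?"))
```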


Using AI for DevOps: What Developers and Ops Need To Know

“AI can be incredibly powerful in DevOps when it’s implemented with a clear framework that makes it easy for developers to do the right thing and hard for them to do the wrong thing,” says Durkin. “Making it easy to do the right thing starts with standardizing templates and policies to streamline workflows. Create templates and enforce policies that support easy, repeatable integration of AI tools. By establishing policies that automate security and compliance checks, AI tools can operate within these boundaries, providing valuable support without compromising standards. This approach simplifies adoption and makes it harder to skip essential steps, reinforcing best practices across teams.” ... While having a well-considered strategy in place before embracing AI and DevOps is a must, Durkin and Govrin both offered up some additional tips and advice for getting AI tools and technologies to integrate with DevOps ambitions more easily. “In enterprise environments, deploying AI applications locally can significantly improve adoption and integration,” said Govrin. “Unlike consumer apps, enterprise AI benefits greatly from self-hosted setups, where solutions like local inference, support for self-hosted models and edge inferencing play a key role. These methods keep data secure and mitigate risks associated with data transfer across public clouds.”
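
As a rough illustration of "making it easy to do the right thing," the sketch below shows an automated policy gate a pipeline could run before merging or deploying a change. The manifest format and the specific rules are invented examples; a real setup might use tooling such as OPA/Conftest instead.

```python
# Illustrative CI policy gate: reject changes (human- or AI-generated) that
# skip required security settings. Rules and manifest schema are hypothetical.
import json
import sys

REQUIRED_KEYS = {"owner", "data_classification"}

def violations(manifest: dict) -> list[str]:
    problems = []
    if not REQUIRED_KEYS.issubset(manifest):
        problems.append(f"missing required metadata: {REQUIRED_KEYS - manifest.keys()}")
    if manifest.get("image", "").endswith(":latest"):
        problems.append("mutable ':latest' image tags are not allowed")
    if not manifest.get("scan_passed", False):
        problems.append("container scan has not passed")
    return problems

if __name__ == "__main__":
    manifest = json.load(open(sys.argv[1]))
    found = violations(manifest)
    for p in found:
        print(f"POLICY VIOLATION: {p}")
    sys.exit(1 if found else 0)  # non-zero exit fails the CI job
```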


The CISO paradox: With great responsibility comes little or no power

The absence of command makes cybersecurity decision-making a tedious and often frustrating process for CISOs. They are expected to move fast, to anticipate and address security issues before they become realized. But without command, they’re stuck in a cycle of “selling” the importance of security investments, waiting for approvals, and relying on others to prioritize those investments. This constant need for buy-in slows down response times and creates opportunities for something bad to happen. In cybersecurity, where timing is everything, these delays can be costly. Beyond timing, the concept of command is critical for strategic alignment and empowerment. In organizations where the CISO lacks true command, they’re forced to operate reactively rather than proactively. ... If organizations want to truly protect themselves, they need to recognize that CISOs require true command. The most effective CISOs are those who can operate with full authority over their domain, free from constant internal roadblocks. As companies consider how best to secure their data, they should ask themselves whether they are genuinely setting their CISOs up for success. Are they empowering them with the resources, authority, and autonomy to act? Or are they merely assigning a high-stakes responsibility without the power to fulfill it?


Harnessing SaaS to elevate your digital transformation journey

While SaaS provides the infrastructure, AI is the catalyst that powers digital transformation at scale. Companies are increasingly adopting AI-driven SaaS platforms to streamline workflows, automate tasks, and make data-driven decisions. In the B2B SaaS sector, this combination is revolutionising how businesses operate, helping them personalize customer interactions, predict outcomes, and optimize operations. ... In manufacturing, AI optimizes supply chain management, reducing waste and increasing productivity. In the finance sector, AI-driven SaaS automates risk assessment, improving decision-making and reducing operational costs. The benefits of adopting AI and SaaS are clear: enhanced customer experience, streamlined operations, and the ability to innovate faster than ever before. Companies that fail to integrate these technologies risk falling behind as competitors capitalize on these advancements to deliver superior products and services. As businesses continue to adopt SaaS and AI-driven solutions, the future of digital transformation looks promising. Companies are no longer just thinking about automating processes or improving efficiency, they are investing in technologies that will help them shape the future of their industries. 


Tackling ransomware without banning ransom payments

Despite these somewhat muddied waters, the correct response to ransomware attacks is clear: paying demands should almost always be a last resort. The only exception should be where there is a risk to life. Paying because it’s easy, costs less and causes less disruption to the business is not a good enough reason to pay, regardless of whether it’s the business handing over the cash or an insurer. However, while a step in the right direction, totally banning ransom payments addresses only one form of attack and feels a bit like a ‘whack-a-mole’ strategy. It may ease the rise in attacks for a short while, but attackers will inevitably switch tactics, to compromising business email perhaps, or something we’ve not even heard of yet. So, what else can be done to slow the rise in ransomware attacks? Well, we can consider a few options, such as closing vulnerability trading brokers and regulating cryptocurrency transactions. To pick on the latter as an example, most cybercrime monetizes through cryptocurrency, so rather than simply banning payments, it could be a better option to regulate the crypto industry and flow of money. Alongside this kind of regulatory change, governments could also consider moving the decision of whether to pay or not to an independent body.


CISOs in 2025: Balancing security, compliance, and accountability

The scope of the CISO role has expanded significantly over the past 10-15 years, and has moved from mainly technical oversight to strategic leadership, risk management, and regulatory compliance. The constant pressure to prevent breaches and manage incidents can lead to high stress and burnout, making the role less appealing. This also means that modern CISOs must possess a blend of technical expertise, strategic thinking, and strong interpersonal skills. The requirement for such a diverse skill set can limit the pool of qualified candidates, as not all cybersecurity professionals have the necessary combination of skills. ... CISOs will need to be able to effectively communicate complex cybersecurity issues to non-technical board members and executives. This involves translating technical jargon into business language, and clearly articulating the impact of cybersecurity risks on the organization’s overall business strategy. And as cybersecurity becomes integral to business strategy, CISOs must be able to think beyond immediate threats, and focus on long-term strategic planning. This includes understanding how cybersecurity initiatives align with business goals and contribute to competitive advantage.


Emergence of Preemptive Cyber Defense: The Key to Defusing Sophisticated Attacks

The frequency of attacks is only part of the problem. Perhaps the biggest concern is the sophistication of incidents. Right now, cybercriminals are using everything from AI and machine learning to polymorphic malware coupled with sophisticated psychological tactics that play off of breaking world events and geopolitical tension. ... The clear limitations of these reactive systems have many businesses looking to shift away from the “one-size-fits-all” approach to more dynamic options. ... With redundancy, security, and resiliency in mind, many companies are following the lead of government agencies and diversifying their cybersecurity investments across multiple providers. This includes the option of a preemptive cyber defense solution, which, rather than relying on a single offering, blends in three — a triad that addresses the complexities of modern cybersecurity challenges. ... The preemptive cyber defense triad offers businesses the ultimate protection—a security ecosystem where the attack surface is constantly changing (AMTD), the security controls are always optimized (ASCA), and the overall threat exposure is continuously managed and minimized (CTEM).


Insurance Firm Introduces Liability Coverage for CISOs

“CISOs are the front line of defense against cyber threats, yet their role may leave them exposed to personal liabilities – particularly in light of the Securities and Exchange Commission’s (SEC) new cyber disclosure rules,” Nick Economidis, senior vice president of eRisk at Crum and Forster, said in a statement. “Our CISO Professional Liability Insurance is designed to bridge that gap, providing an essential safety net by offering CISOs the protection they need to perform their jobs with confidence.” ... The new insurance program by the Morristown, New Jersey-based insurance firm comes in the wake of a federal court judge dismissing charges against software maker SolarWinds and its CISO, Tim Brown. The charges were made in connection with the massive software supply chain attack in 2020 by a threat group supported by Russia’s foreign intelligence services. ... “As personal liability risks for CISOs continue to evolve, the availability and scope of D&O insurance will remain a critical factor in recruiting and retaining top cybersecurity talent,” Fehling wrote. “Companies that offer robust insurance protection may gain a competitive advantage in the tight market for skilled security leaders.”



Quote for the day:

"If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work." -- Thomas J. Watson

Daily Tech Digest - November 12, 2024

Researchers Focus In On ‘Lightcone Bound’ To Develop An Efficiency Benchmark For Quantum Computers

The researchers formulated this bound by first reinterpreting the quantum circuit mapping challenge through quantum information theory. They focused on the SWAP “uncomplexity,” the lowest number of SWAP operations needed, which they determined using graph theory and information geometry. By representing qubit interactions as density matrices, they applied concepts from network science to simplify circuit interactions. To establish the bound, in an interesting twist, the team employed a Penrose diagram — a tool from theoretical physics typically used to depict spacetime geometries — to visualize the paths required for minimal SWAP-gate application. They then compared their model against a brute-force method and IBM’s Qiskit compiler, with consistent results affirming that their bound offers a practical minimum SWAP requirement for near-term quantum circuits. The researchers acknowledge the lightcone model has some limitations that could be the focus of future work. For example, it assumes ideal conditions, such as a noiseless processor and indefinite parallelization, conditions not yet achievable with current quantum technology. The model also does not account for single-qubit gate interactions, focusing only on two-qubit operations, which limits its direct applicability for certain quantum circuits.
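
As a loose illustration of why circuit mapping needs SWAP gates at all (this is a naive proxy, not the lightcone bound derived in the paper), the sketch below uses the networkx library to compare a circuit's two-qubit gates against a hypothetical device's coupling graph and tallies the extra distance each non-adjacent pair would have to be bridged.

```python
# Crude illustration of the mapping problem: count how far apart each
# two-qubit gate's operands sit on the hardware coupling graph under a fixed
# layout. Each unit of extra distance needs at least one SWAP (ignoring
# reuse). This is NOT the paper's lightcone bound, just a toy baseline.
import networkx as nx

coupling = nx.path_graph(5)                 # hypothetical 5-qubit line device: 0-1-2-3-4
circuit = [(0, 1), (0, 4), (2, 4), (1, 3)]  # logical two-qubit gates
layout = {q: q for q in range(5)}           # trivial initial placement

def naive_swap_estimate(gates, graph, layout):
    total = 0
    for a, b in gates:
        d = nx.shortest_path_length(graph, layout[a], layout[b])
        total += max(d - 1, 0)
    return total

print("naive SWAP estimate:", naive_swap_estimate(circuit, coupling, layout))
```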


Evaluating your organization’s application risk management journey

One way CISOs can articulate application risk in financial terms is by linking security improvement efforts to measurable outcomes, like cost savings and reduced risk exposure. This means quantifying the potential financial fallout from security incidents and showing how preventative measures mitigate these costs. CISOs need to equip their teams with tools that will help them protect their business in the short and long term. A study we commissioned with Forrester found that putting application security measures in place could save the average organization millions in avoided breach costs. ... To keep application risk management a dynamic, continuous process, CISOs should integrate security into every stage of software development. Instead of relying on periodic assessments, organisations should implement real-time risk analysis, continuous monitoring, and feedback mechanisms to enable teams to address vulnerabilities promptly as they arise, rather than waiting for scheduled evaluations. Incorporating automation can also play a key role in streamlining this process, enabling quicker remediation of identified risks. Building on this, creating a security-first mindset across the organisation – through training and clear communication – ensures risk management adapts to new threats, supporting both innovation and compliance.


How a Second Trump Presidency Could Shape the Data Center Industry

“We anticipate that the incoming administration will have a keen focus on AI and our nation’s ability to be the global leader in the space,” Andy Cvengros, managing director, co-lead of US data center markets for JLL, told Data Center Knowledge. He said to do that, the industry will need to solve the transmission delivery crisis and continue to increase generation capacity rapidly. This may include reactivating decommissioned coal and nuclear power plants, as well as commissioning more of them. “We also anticipate that state and federal governments will become much more active in enabling the utilities to proactively expand substations, procure long lead items and support key submarket expansion through planned developments,” Cvengros said. ... Despite the federal government’s likely hands-off approach, Harvey said he believes large corporations might support consistent, global standards – especially since European regulations are far stricter. “US companies would prefer a unified regulatory framework to avoid navigating a complex patchwork of rules across different regions,” he said. Still, Europe’s stronger regulatory stance on renewable power might lead some companies to prioritize US-based expansions, where subsidies and fewer regulations make operations more economically feasible.


Data Breaches are a Dime a Dozen: It’s Time for a New Cybersecurity Paradigm

The modern-day ‘stack’ includes many disparate technology layers—from physical and virtual servers to containers, Kubernetes clusters, DevOps dashboards, IoT, mobile platforms, cloud provider accounts, and, more recently, large language models for GenAI. This has created the perfect storm for threat actors, who are targeting the access and identity silos that significantly broaden the attack surface. The sheer volume of weekly breaches reported in the press underscores the importance of protecting the whole stack with Zero Trust principles. Too often, we see bad actors exploiting some long-lived, stale privilege that allows them to persist on a network and pivot to the part of a company’s infrastructure that houses the most sensitive data. ... Zero Trust access for modern infrastructure benefits from being coupled with a unified access mechanism that acts as a front-end to all the disparate infrastructure access protocols – a single control point for authentication and authorization. This provides visibility, auditing, enforcement of policies, and compliance with regulations, all in one place. These solutions already exist on the market, deployed by security-minded organizations. However, adoption is still in early days. 


AI’s math problem: FrontierMath benchmark shows how far technology still has to go

Mathematics, especially at the research level, is a unique domain for testing AI. Unlike natural language or image recognition, math requires precise, logical thinking, often over many steps. Each step in a proof or solution builds on the one before it, meaning that a single error can render the entire solution incorrect. “Mathematics offers a uniquely suitable sandbox for evaluating complex reasoning,” Epoch AI posted on X.com. “It requires creativity and extended chains of precise logic—often involving intricate proofs—that must be meticulously planned and executed, yet allows for objective verification of results.” This makes math an ideal testbed for AI’s reasoning capabilities. It’s not enough for the system to generate an answer—it has to understand the structure of the problem and navigate through multiple layers of logic to arrive at the correct solution. And unlike other domains, where evaluation can be subjective or noisy, math provides a clean, verifiable standard: either the problem is solved or it isn’t. But even with access to tools like Python, which allows AI models to write and run code to test hypotheses and verify intermediate results, the top models are still falling short.
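
The "clean, verifiable standard" is the crux: a proposed answer can be checked mechanically, without human judgment. As a tiny, self-contained illustration (the numbers are arbitrary examples, not FrontierMath problems), a claimed factorization can be verified by recomputation:

```python
# Math answers are objectively checkable: verify a claimed integer
# factorization by multiplying the factors back together.
def verify_factorization(n: int, factors: list[int]) -> bool:
    product = 1
    for f in factors:
        if f <= 1:          # reject trivial or invalid factors
            return False
        product *= f
    return product == n

claimed = [3, 5, 7, 19]                      # e.g., produced by a model
print(verify_factorization(1995, claimed))   # True: 3*5*7*19 = 1995
```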


Can Wasm replace containers?

One area where Wasm shines is edge computing. Here, Wasm’s lightweight, sandboxed nature makes it especially intriguing. “We need software isolation on the edge, but containers consume too many resources,” says Michael J. Yuan, founder of Second State and the Cloud Native Computing Foundation’s WasmEdge project. “Wasm can be used to isolate and manage software where containers are ‘too heavy.’” Whereas containers take up megabytes or gigabytes, Wasm modules take mere kilobytes or megabytes. Compared to containers, a .wasm file is smaller and agnostic to the runtime, notes Bailey Hayes, CTO of Cosmonic. “Wasm’s portability allows workloads to run across heterogeneous environments, such as cloud, edge, or even resource-constrained devices.” ... Wasm has a clear role in performance-critical workloads, including serverless functions and certain AI applications. “There are definitive applications where Wasm will be the first choice or be chosen over containers,” says Luke Wagner, distinguished engineer at Fastly, who notes that Wasm brings cost-savings and cold-start improvements to serverless-style workloads. “Wasm will be attractive for enterprises that don’t want to be locked into the current set of proprietary serverless offerings.”
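
To give a feel for how small and self-contained a Wasm module is compared to a container image, here is a minimal host-side sketch that compiles and runs a hand-written module, assuming the wasmtime Python package is installed (pip install wasmtime); the module itself is only a few dozen bytes once compiled.

```python
# Minimal illustration of running a WebAssembly module from a host process,
# assuming the `wasmtime` Python package. The module exports a single
# integer-addition function.
from wasmtime import Engine, Store, Module, Instance

wat = """
(module
  (func $add (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add)
  (export "add" (func $add)))
"""

engine = Engine()
store = Store(engine)
module = Module(engine, wat)            # compiles the text-format module
instance = Instance(store, module, [])  # no imports needed
add = instance.exports(store)["add"]
print(add(store, 2, 40))                # 42
```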


Authentication Actions Boost Security and Customer Experience

Authentication actions can be used as effective tools for addressing the complex access scenarios organizations must manage and secure. They can be added to workflows to implement convenience and security measures after users have successfully proven their identity during the login process. ... When using authentication actions, first take some time to fully map out the customer journey you want to achieve, and most importantly, all of the possible variations of this journey. Think of your authentication requirements as a flowchart that you control. Start by mapping out your requirements for different users and how you want them to sign up and authenticate. Understand the trade-off between security and user experience. Consider using actions to enable a frictionless initial login with a simple authentication method. You can use step-up authentication as a technique that increases the level of assurance when the user needs to perform higher-privilege operations. You can also use actions to implement dynamic behavior per user. For instance, you can use an action that captures an identifier like an email to identify the user. Then you can use another action to look up the user’s preferred authentication method or methods to give each user a personalized experience.
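
The sketch below illustrates that flowchart idea in plain Python: one action identifies the user from an email and picks their preferred login method, another enforces step-up authentication only for high-privilege operations. All names, data, and rules are hypothetical; real identity platforms expose their own action APIs.

```python
# Illustrative "authentication actions": identify the user, personalize the
# first factor, and require step-up only when the operation demands it.

PREFERRED_METHOD = {"ana@example.com": "passkey", "bo@example.com": "totp"}
HIGH_PRIVILEGE = {"transfer_funds", "change_password"}

def first_factor(email: str) -> str:
    # Action 1: capture an identifier, then look up the preferred method
    # to give the user a frictionless, personalized login.
    method = PREFERRED_METHOD.get(email, "email_link")
    return f"authenticated {email} via {method}"

def authorize(email: str, operation: str, assurance_level: int) -> str:
    # Action 2: step-up authentication raises assurance only when needed.
    if operation in HIGH_PRIVILEGE and assurance_level < 2:
        return "step-up required: prompt for a second factor"
    return f"{operation} allowed for {email}"

print(first_factor("ana@example.com"))
print(authorize("ana@example.com", "view_balance", assurance_level=1))
print(authorize("ana@example.com", "transfer_funds", assurance_level=1))
```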


How Businesses use Modern Development Platforms to Streamline Automation

APIs are essential for streamlining data flows between different systems. They enable various software applications to communicate with each other, automating data exchange and reducing manual input. For instance, integrating an API between a customer relationship management (CRM) system and an email marketing platform can automatically sync contact information and campaign data. This not only saves time, but also minimizes errors that can occur with manual data entry. ... Workflow automation tools are designed to streamline business processes by automating repetitive steps and ensuring smooth transitions between tasks. These tools help businesses design and manage workflows, automate task assignments, and monitor progress. For example, tools like Asana and Monday.com allow teams to automate task notifications, approvals, and status updates. By automating these processes, businesses can improve collaboration and reduce the risk of missed deadlines or overlooked tasks. Workflow automation tools also provide valuable insights into process performance, enabling companies to identify bottlenecks and optimize their operations. This leads to more efficient workflows and better resource management.
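
As a concrete sketch of the CRM-to-email-marketing sync described above, the snippet below pulls contacts from one REST endpoint and pushes them to another using the requests library. Both URLs and the bearer-token auth are hypothetical placeholders; real platforms such as Salesforce or Mailchimp each define their own APIs.

```python
# Sketch of an API-driven contact sync between a CRM and an email marketing
# platform. Endpoints and auth are hypothetical.
import requests

CRM_URL = "https://crm.example.com/api/contacts"            # hypothetical
MARKETING_URL = "https://mail.example.com/api/subscribers"  # hypothetical
HEADERS = {"Authorization": "Bearer <token>"}

def sync_contacts() -> int:
    contacts = requests.get(CRM_URL, headers=HEADERS, timeout=10).json()
    synced = 0
    for contact in contacts:
        payload = {"email": contact["email"], "name": contact.get("name", "")}
        resp = requests.post(MARKETING_URL, json=payload, headers=HEADERS, timeout=10)
        if resp.ok:
            synced += 1
    return synced

if __name__ == "__main__":
    print(f"synced {sync_contacts()} contacts")
```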

“Micromanagement is one of the fastest ways to destroy IT culture,” says Jay Ferro, EVP and chief information, technology, and product officer at Clario. “When CIOs don’t trust their teams to make decisions or constantly hover over every detail, it stifles creativity and innovation. High-performing professionals crave autonomy; if they feel suffocated by micromanagement, they’ll either disengage or leave for an environment where they’re empowered to do their best work.” ... One of the most challenging issues facing transformational CIOs is the overwhelming demand to take on more initiatives, deliver to greater scope, or accept challenging deadlines. Overcommitting to what IT can reasonably accomplish is an issue, but what kills IT culture is when the CIO leaves program leaders defenseless when stakeholders are frustrated or when executive detractors roadblock progress. “It demoralizes IT when there is a lack of direction, no IT strategy, and the CIO says yes to everything the business asks for regardless of whether the IT team has the capacity,” says Martin Davis, managing partner at Dunelm Associates. “But it totally kills IT culture when the CIO doesn’t shield teams from angry or disappointed business senior management and stakeholders.”


Understanding Data Governance Maturity: An In-Depth Exploration

Maturity in data governance is typically assessed through various models that measure different aspects of data management, such as data quality and compliance, and examine processes for managing data’s context (metadata) and its security. Maturity models provide a structured way to evaluate where an organization stands and how it can improve for a given function. ... Many maturity models are complex and may require significant time and resources to implement. Organizations need to ensure they have the capacity to effectively handle the complexity involved in using these models. Additionally, some data governance maturity models do not address the relevant related data management functions, such as metadata management, data quality management, or data security, to a sufficient level of detail for some organizations. ... Implementing changes based on maturity model assessments can face resistance; organizational culture may not accept the views discovered in an assessment. Adopting and sustaining effective change management strategies and choosing a maturity model carefully can help overcome resistance and ensure successful implementation.



Quote for the day:

"Whenever you see a successful person, you only see the public glories, never the private sacrifices to reach them." -- Vaibhav Shah

Daily Tech Digest - November 11, 2024

What if robots learned the same way genAI chatbots do?

To unlock further potential in robotic learning, training objectives beyond supervised learning, such as self-supervised or unsupervised learning, should be investigated. It is important to grow the datasets with diverse, high-quality data. This could include teleoperation data, simulations, human videos, and deployed robot data. Researchers need to learn the optimal blend of data types for higher HPT success rates. Researchers and later industry will need to create standardized virtual testing grounds to facilitate the comparison of different robot models. ... Think of it as giving robots more demanding, more realistic challenges to solve. Scientists are also looking into how the amount of data, the size of the robot’s “brain” (model), and its performance are connected. Understanding this relationship could help us build better robots more efficiently. Another exciting area is teaching robots to understand different types of information. This could include 3D maps of their surroundings, touch sensors, and even data from human actions. By combining all these different inputs, robots could learn to understand their environment more like humans do. All these research ideas aim to create smarter, more versatile robots that can handle a wider range of tasks in the real world. 


Chief AI Officers: Should Every Business Have One?

The CAIO's mandate extends beyond technical oversight. These leaders define their company's AI vision, bring solutions to market and establish ethical governance frameworks, Laqab said. Their systems address data privacy protection and eliminate bias in AI implementations while aligning with organizational objectives. At Ascendion, every data scientist and machine learning engineer develops solutions under stringent guidelines, which Laqab describes as "prioritizing rigorous planning and transparency" - an approach critical in regulated sectors such as healthcare and finance where trustworthy AI proves essential. ... CAIOs work in strategic partnership with CIOs and CTOs to integrate AI capabilities into organizational systems. Laqab underscored the importance of unified planning where AI leaders and technology teams implement initiatives without duplicating or straining resources. This approach builds cross-functional momentum, enabling CAIOs to embed AI in broader processes while maximizing existing IT investments and supporting overall strategy. ... "Just as the CDO emerged to manage data, the CAIO is essential for navigating the complex landscape of AI technologies. 


New research reveals AI adoption on rise, but challenges remain in data governance & ROI realisation

Commenting on the survey, Noshin Kagalwalla, Vice President & Managing Director, SAS India, said: “Indian companies are undoubtedly making progress in AI adoption, but significant work remains. The challenge lies not only in deploying AI but also in a way that it is trustworthy, scalable, and aligned with long-term business objectives. Strategic investments in data governance and AI infrastructure will be crucial to driving sustainable AI performance across industries in India.” “The disparity in target outcomes between AI Leaders and AI Followers demonstrates a lack of clear strategy and roadmap. Where AI Followers are focused on short-term, productivity-based results, AI Leaders have moved beyond these to more complex functional and industry use cases,” said Shukri Dabaghi, Senior Vice President, Asia Pacific and EMEA Emerging at SAS. “As businesses look to capitalise on the transformative potential of AI, it’s important for business leaders to learn from the differences between an AI Leader and an AI Follower. Avoiding a ‘gold rush’ way of thinking ensures long-term transformation is built on trustworthy AI and capabilities in data, processes and skills,” said Mr. Dabaghi.


4 reasons why veterans thrive as cybersecurity professionals

Through their military experience, veterans learn to combat the most sophisticated adversaries in existence and adopt an apex attacker’s perspective. Many of today’s malicious actors are not lone individuals wearing a hoodie and operating from a cybercafe; they’re highly skilled, well-funded nation-state actors or part of a larger cybercrime group that operates like a corporate organization. Dealing with such high-level adversaries requires defenders who are trained specifically to combat their techniques. Many veterans are trained extensively in red team attack simulations, in which they pose as an attacker and attempt to breach an organization’s systems to assess vulnerabilities and boost the organization’s security posture. This training is used to combat nation-state attackers, with military members engaging in monthly or multi-year attack simulations. ... Maintaining security requires a distinct mentality where your approach meets the dedication of the threat actor trying to hack into your system. Veterans can become skilled in specialized areas like hunting for advanced adversaries within security systems. They know that adversaries can be relentless in their attempts, and they can be adept at mounting an equally relentless defense.


Responsible AI starts with transparency

Today, most foundational AI models have been trained on data scrubbed from the public internet, so it’s essentially impossible for users to understand the dataset at web scale. Even the model providers themselves aren’t always able to fully understand the composition of their own training data when it’s pulled from so many different sources across the entire internet. Even if they were, they wouldn’t be required to disclose that information to model users. This lack of data transparency is one reason that using publicly available AI models may not be appropriate for enterprises. However, there are ways to work around this. For instance, you can build proxy models, which are simple models used to approximate the results of your more complex AI models. Building a good proxy model requires you to balance the tradeoff between simplicity and accuracy. Nevertheless, even a very simple approximated model can help you understand how each feature of a model impacts its predictions. ... When it comes to building trust, it’s impossible to fully separate your AI models from the humans who use them. Humans naturally want to have some control over the tools they use; if you can’t give employees that sense of control, it’s unlikely they’ll continue to use AI.
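
The proxy-model idea can be shown in a few lines with scikit-learn (assuming it is installed): a shallow decision tree is fit to the predictions of a larger "black box" model rather than to the original labels, and its feature importances hint at what drives those predictions. The models and data here are stand-ins chosen only to keep the sketch self-contained.

```python
# Sketch of a proxy (surrogate) model: approximate a complex model with a
# small, interpretable one fit to the complex model's own predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# Stand-in for the complex production model.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Proxy model: fit to the black box's outputs, not the original labels.
proxy = DecisionTreeClassifier(max_depth=3, random_state=0)
proxy.fit(X, black_box.predict(X))

# Fidelity: how closely the simple tree mimics the black box.
print("fidelity:", proxy.score(X, black_box.predict(X)))
print("feature importances:", proxy.feature_importances_.round(2))
```

The tradeoff the article mentions shows up directly in max_depth: a deeper tree mimics the black box more faithfully but becomes harder to read.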


Combating Cybercrime: What to Expect From Trump Presidency?

Trump is no stranger to combating cybercrime. His first administration updated the National Cyber Strategy for the first time in 15 years. "The administration will push to ensure that our federal departments and agencies have the necessary legal authorities and resources to combat transnational cybercriminal activity, including identifying and dismantling botnets, dark markets and other infrastructure used to enable cybercrime," it said. Especially where nation-state attacks are concerned, defending forward - disrupting malicious cyber activity at its source - has been U.S. military doctrine since 2018. But experts also see blemishes on Trump's cyber track record, including his axing the top cybersecurity coordinator role in the White House, weakening cyber diplomacy - a core strategy for tackling cybercrime safe havens - and firing the head of the Cybersecurity and Infrastructure Security Agency, which helps improve domestic resilience. Whether the U.S. continues its strategy of naming and shaming cybercriminals it can't reach, often in Russia, is unclear. Ian Thornton-Trump, a veteran CISO who formerly served with the Military Intelligence Branch of the Canadian Forces, predicts the administration could redirect resources to focus more on China and deemphasize the naming, shaming and disruption of Russian criminals' operations.


Transforming Enterprise Networks With AIOps: A New Era of Intelligent Connectivity

One of the primary benefits of AIOps is its ability to enhance intelligent network management. In today's complex network environments, optimizing performance and ensuring seamless connectivity in a continuously changing fabric is critical. AIOps provides insights across the various IT domains (e.g., application, security, infrastructure, etc.) that help networking professionals identify areas for improvement, automate routine tasks, and maintain optimal network performance. By leveraging AI-driven analytics, organizations can ensure that their networks are always running smoothly, reducing downtime and improving overall efficiency. ... AIOps is paving the way for the creation of autonomous networks that didn't materialize during the era of software-defined networking or intent-based networking. These initiatives claimed to create self-managing, self-healing networks that could adapt to changing conditions and demands with minimal human intervention, except a crucial element was missing: AIOps. AIOps highlights areas where automation can be implemented, allowing networks to respond dynamically to issues and changes in the environment. 


CIOs to spend ambitiously on AI in 2025 — and beyond

The big investments in generative AI may eventually rival traditional cloud investments, but that does not mean top cloud providers — all of whom are top AI platform providers — will suffer. Amazon Web Services, Microsoft Azure, and Google Cloud Platform are enabling the massive amount of gen AI experimentation and planned deployment of AI next year, IDC points out. ... “In the near term, most enterprises are focusing on automation and productivity use cases that can be implemented without fundamentally changing business processes,” McCarthy adds. “However, the higher value use cases involve new business models, which require widespread organizational change.” Stephen Crowley, senior advisor for S&L Ventures and former CIO of global technology solutions at Covetrus, still sees that future as a little way off. “Building the foundation is different from moving to production with AI apps. I think that will take longer,” he says. ... The risks of accidentally exposing sensitive corporate data or designing gen AI models that fly afield of their intended missions are also top of mind for Dairyland Power’s Melby, who is working with a Microsoft partner to deploy Copilot and Azure OpenAI capabilities to employees in a secure manner.


Building a workplace culture that supports mental well-being: A guide

Balancing productivity and employee well-being can be challenging, but it is achievable. Creating a supportive work environment makes employees more likely to be engaged and productive. This approach not only benefits employees but also drives better business outcomes. ... To relate this model to mental health in the workplace, we can consider the inner foundational level as the basic needs and rights that must be met for employees, such as fair wages, job security, and a healthy work-life balance ... The inner circle or foundational level showcases the basic needs and rights of employees: ensuring employees have fair wages, access to mental health resources, supportive management, and a healthy work-life balance, while avoiding overwork and reducing workplace stressors. The middle circle depicts evolved workplaces. A mental health mature workplace will invest in engaged employees, leaders who walk the talk, policies that ensure employees thrive, and psychological safety and the necessary infrastructure, sensitization and practice to support employee well-being and growth. The outer circle depicts a sustainable mental health positive culture that balances and is focused on achieving higher levels of maturity in culture and processes.


7 reasons security breach sources remain unknown

As attacks become more sophisticated it can become more difficult to unpack the cause of problems, says Raj Samani, SVP and chief scientist at security firm Rapid7. “We must acknowledge that many threat groups take measures to obfuscate their tracks, invariably making any investigation more challenging,” he says. “However, this is often only part of the reason why identifying the source of the breach is so difficult.” Samani adds: “Whilst technologies will aid the investigation, the time spent retroactively reviewing such incidents often competes with the urgency of the next issue, or indeed, the demand to get the environment operational again.” Many breaches are detected long after they occur, and delays make it harder to identify root causes. Here, time is on the side of an attacker, with computer forensic capabilities fading over time as data is amended, overwritten, and deleted. “Hackers are always finding new ways to blend into regular network traffic, so even the best detection systems can end up playing a never-ending game of ‘whack-a-mole’ with threats,” says Peter Wood, CTO at Spectrum Search. “And while the systems might flag something suspicious, figuring out exactly where it started is another story altogether.”



Quote for the day:

“Creativity is thinking up new things. Innovation is doing new things.” -- Theodore Levitt

Daily Tech Digest - November 10, 2024

Technical Debt: An enterprise’s self-inflicted cyber risk

Technical debt issues vary in risk level depending on the scope and blast radius of the issue. Unaddressed high-risk technical debt issues create inefficiency and security exposure while diminishing network reliability and performance. There’s the obvious financial risk that comes from wasted time, inefficiencies, and maintenance costs. Adding tools potentially introduces new vulnerabilities, increasing the attack surface for cyber threats. A lot of the literature around technical debt focuses on obsolete technology on desktops. While this does present some risk, desktops have a limited blast radius when compromised. Outdated hardware and unattended software vulnerabilities within network infrastructure pose a much more imminent and severe risk as they serve as a convenient entry point for malicious actors with a much wider potential reach. An unpatched or end-of-life router, switch, or firewall, riddled with documented vulnerabilities, creates a clear path to infiltrating the network. By methodically addressing technical debt, enterprises can significantly mitigate cyber risks, enhance operational preparedness, and minimize unforeseen infrastructure disruptions. 


Why Your AI Will Never Take Off Without Better Data Accessibility

Data management and security challenges cast a long shadow over efforts to modernize infrastructures in support of AI and cloud strategies. The survey results reveal that while CIOs prioritize streamlining business processes through cloud infrastructures, improving data security and business resilience is a close second. Security is a persistent challenge for companies managing large volumes of file data and it continues to complicate efforts to enhance data accessibility. Nasuni’s research highlights that 49% of firms (rising to 54% in the UK) view security as their biggest problem when managing file data infrastructures. This issue ranks ahead of concerns such as rapid recovery from cyberattacks and ensuring data compliance. As companies attempt to move their file data to the cloud, security is again the primary obstacle, with 45% of all respondents—and 55% in the DACH region—citing it as the leading barrier, far outstripping concerns over cost control, upskilling employees and data migration challenges. These security concerns are not just theoretical. Over half of the companies surveyed admitted that they had experienced a cyber incident from which they struggled to recover. Alarmingly, only one in five said they managed to recover from such incidents easily. 


Exploring DORA: How to manage ICT incidents and minimize cyber threat risks

The SOC must be able to quickly detect and manage ICT incidents. This involves proactive, around-the-clock monitoring of IT infrastructure to identify anomalies and potential threats early on. Security teams can employ advanced tools such as security automation, orchestration and response (SOAR), extended detection and response (XDR), and security information and event management (SIEM) systems, as well as threat analysis platforms, to accomplish this. Through this monitoring, incidents can be identified before they escalate and cause greater damage. ... DORA introduces a harmonized reporting system for serious ICT incidents and significant cyber threats. The aim of this reporting system is to ensure that relevant information is quickly communicated to all responsible authorities, enabling them to assess the impact of an incident on the company and the financial market in a timely manner and respond accordingly. ... One of the tasks of SOC analysts is to ensure effective communication with relevant stakeholders, such as senior management, specialized departments and responsible authorities. This also includes the creation and submission of the necessary DORA reports.


What is Cyber Resilience? Insurance, Recovery, and Layered Defenses

While cyber insurance can provide financial protection against the fallout of ransomware, it’s important to understand that it’s not a silver bullet. Insurance alone won’t save your business from downtime, data loss, or reputation damage. As we’ve seen with other types of insurance, such as property or health insurance, simply holding a policy doesn’t mean you’re immune to risks. While cyber insurance is designed to mitigate financial risks, insurers are becoming increasingly discerning, often requiring businesses to demonstrate adequate cybersecurity controls before providing coverage. Gone are the days when businesses could simply “purchase” cyber insurance without robust cyber hygiene in place. Today’s insurers require businesses to have key controls such as multi-factor authentication (MFA), incident response plans, and regular vulnerability assessments. Moreover, insurance alone doesn’t address the critical issue of data recovery. While an insurance payout can help with financial recovery, it can’t restore lost data or rebuild your reputation. This is where a comprehensive cybersecurity strategy comes in — one that encompasses both proactive and reactive measures, involving components like third-party data recovery software.


Integrating Legacy Systems with Modern Data Solutions

Many legacy systems were not designed to share data across platforms or departments, leading to the creation of data silos. Critical information gets trapped in isolated systems, preventing a holistic view of the organization’s data and hindering comprehensive analysis and decision-making. ... Modern solutions are designed to scale dynamically, whether it’s accommodating more users, handling larger datasets, or managing more complex computations. In contrast, legacy systems are often constrained by outdated infrastructure, making it difficult to scale operations efficiently. Addressing this requires refactoring old code and updating the system architecture to manage accumulated technical debt. ... Older systems typically lack the robust security features of modern solutions, making them more vulnerable to cyber-attacks. Integrating these systems without upgrading security protocols can expose sensitive data to threats. Ensuring robust security measures during integration is critical to protect data integrity and privacy. ... Maintaining legacy systems can be costly due to outdated hardware, limited vendor support, and the need for specialized expertise. Integrating them with modern solutions can add to this complexity and expense. 


The challenges of hybrid IT in the age of cloud repatriation

The story of cloud repatriation is often one of regaining operational control. A recent report found that 25% of organizations surveyed are already moving some cloud workloads back on-premises. Repatriation offers an opportunity to address these issues like rising costs, data privacy concerns, and security issues. Depending on their circumstances, managing IT resources internally can allow some organizations to customize their infrastructure to meet these specific needs while providing direct oversight over performance and security. With rising regulations surrounding data privacy and protection, enhanced control over on-prem data storage and management provides significant advantages by simplifying compliance efforts. ... However, cloud repatriation can often create challenges of its own. The costs associated with moving services back on-prem can be significant: new hardware, increased maintenance, and energy expenses should all be factored in. Yet, for some, the financial trade-off for repatriation is worth it, especially if cloud expenses become unsustainable or if significant savings can be achieved by managing resources partially on-prem. Cloud repatriation is a calculated risk that, if done for the right reasons and executed successfully, can lead to efficiency and peace of mind for many companies.


IT Cost Reduction Strategies: 3 Unexpected Ways Enterprise Architecture Can Help

Easier said than done with the traditional process of manual follow-ups hampered by inconsistent documentation often scattered across many teams. The issue with documentation also often means that maintenance efforts are duplicated, wasting resources that could have been better deployed elsewhere. The result is the equivalent of around 3 hours of a dedicated employee’s focus per application per year spent on documentation, governance, and maintenance. Not so for the organization that has a digital-native EA platform that leverages your data to enable scalability and automation in workflows and messaging so you can reach out to the most relevant people in your organization when it's most needed. Features like these can save an immense amount of time otherwise spent identifying the right people to talk to and when to reach out to them, making a company's Enterprise Architecture the single source of truth and a solid foundation for effective governance. The result is a reduction of approximately a third of the time usually needed to achieve this. That valuable time can then be reallocated toward other, more strategic work within the organization. We have seen that a mid-sized company can save approximately $70 thousand annually by reducing its documentation and governance time.


How Rules Can Foster Creativity: The Design System of Reykjavík

Design systems have already gained significant traction, but many are still in their early stages, lacking atomic design structures. While this approach may seem daunting at first, as more designers and developers grow accustomed to working systematically, I believe atomic design will become the norm. Today, most teams create their own design systems, but I foresee a shift toward subscription-based or open-source systems that can be customized at the atomic level. We already see this with systems like Google’s Material UI, IBM’s Carbon, Shopify’s Polaris, and Atlassian’s design system. Adopting a pre-built, well-supported design system makes sense for many organizations. Custom systems are expensive and time-consuming to build, and maintaining them requires ongoing resources, as we learned in Reykjavík. By leveraging a tried-and-tested design system, teams can focus on customization rather than starting from scratch. Contrary to popular belief, this shift won’t stifle creativity. For public services, there is little need for extreme creativity regarding core functionality - these products simply need to work as expected. AI will also play a significant role in evolving design systems.


Eyes on Data: A Data Governance Study Bridging Industry and Academia

The researcher, Tony Mazzarella, is a seasoned data management professional and has extensive experience in data governance within large organizations. His professional and research observations have identified key motivations for this work: Data Governance has a knowledge problem. Existing literature and publications are overly theoretical and lack empirical guidance on practical implementation. The conceptual and practical entanglement of governance and management concepts and activities exacerbates this issue, leading to divergent definitions and perceptions that data governance is overly theoretical. The “people” challenges in data management are often overlooked. Culture is core to data governance, but data governance was first institutionalized as a business function in the financial services industry, coinciding with the shift towards regulatory compliance in response to the 2008 financial crisis. “Data culture” has re-emerged in all industries, but it implies the governance function is tasked with fostering culture change rather than emphasizing that data governance requires a culture change, which is a management challenge. Data Management’s industry-driven nature and reactive ethos result in unnecessary change as the macroenvironment changes, undermining process resilience and sustainability.


The future of data center maintenance

Condition-based maintenance and advanced monitoring services provide operators with more information about the condition and behavior of assets within the system, including insights into how environmental factors, controls, and usage drive service needs. The ability to recommend actions for preventing downtime and extending asset life allows a focus on high-impact items instead of tasks that don't immediately affect asset reliability or lifespan. These items include lifecycle parts replacement, optimizing preventive maintenance schedules, managing parts inventories, and optimizing control logic. The effectiveness of a service visit can subsequently be validated as the actions taken are reflected in asset health analyses. ... Condition-based maintenance and advanced monitoring services include a customer portal for efficient equipment health reporting. Detailed dashboards display site health scores, critical events, and degradation patterns. ... The future of data center maintenance is here – smarter, more efficient, and more reliable than ever. With condition-based maintenance and advanced monitoring services, data centers can anticipate risks and benchmark assets, leading to improved risk management and enhanced availability.



Quote for the day:

"It's not about how smart you are--it's about capturing minds." -- Richie Norton

Daily Tech Digest - November 09, 2024

How the infrastructure industry is leveraging AI and digital twins

The challenges in scaling up the adoption of AI-powered digital twins across the infrastructure sector are multifaceted. First, engineering firms often struggle to obtain clear requirements from owner-operators. While these firms manage design and sometimes construction, they rely on owner-operators to request a digital twin as part of the final infrastructure asset. However, this willingness to adopt digital twins is still lacking in some regions and sectors. Second, many engineering firms need more support due to the high demand for infrastructure. Cumins emphasizes, “This resource constraint makes it more difficult for firms to invest in and effectively implement AI-powered digital twins.” The increasing backlog of projects leaves little time for firms to adopt new technologies and change their workflows. The third and more fundamental challenge is access to historical data, which is crucial for training AI models. “For instance,” Cumins explains, “we train our AI agents using Bentley’s software, which teaches the rules of various engineering disciplines, such as structural and geotechnical engineering. Engineering firms can then fine-tune these AI agents using their historical data and project conditions.”


Serverless computing’s second act

Despite its early hurdles, serverless computing has bounced back, driven by a confluence of evolving developer needs and technological advancements. Major cloud providers such as AWS, Microsoft Azure, and Google Cloud have poured substantial resources into serverless technologies to provide enhancements that address earlier criticisms. For instance, improvements in debugging tools, better handling of cold starts, and new monitoring capabilities are now part of the serverless ecosystem. Additionally, integrating artificial intelligence and machine learning promises to expand the possibilities of serverless applications, making them seem more innovative and responsive. ... One crucial question remains: Is this resurgence enough to secure the future of serverless computing, or is it simply an attempt by cloud providers to recoup their significant investments? At issue is the number of enterprises that have invested in serverless application development. As you know, this investment goes beyond just paying for the serverless technology. Once applications are built around this tech, moving them to other platforms is costly. A temporary fix might not suffice in the long run. While the current trends and forecasts are promising, the final verdict will largely depend on how serverless can overcome past weaknesses and adapt to emerging technological landscapes and enterprise needs.


GenAI’s Impact on Cybersecurity

GenAI is both a blessing and a curse when it comes to cybersecurity. “On the one hand, the incorporation of AI into security tools and technologies has greatly enhanced vendor tooling to provide better threat detection and response through AI-driven features that can analyze vast amounts of data, far quicker than ever before, to identify patterns and anomalies that signal cyber threats,” says Erik Avakian, technical counselor at Info-Tech Research Group. “These new features can help predict new attack vectors, detect malware, vulnerabilities, phishing patterns and other attacks in real-time, including automating the response to certain cyber incidents. This greatly enhances our incident response processes by reducing response times and allowing our security analysts to focus on other and more complex tasks.” ... Meanwhile, hackers and hacking groups have already incorporated AI and large language modeling (LLM) capabilities to carry out incredibly sophisticated attacks, such as next-generation phishing and social engineering attacks using deep fakes. “The incorporation of voice impersonation and personalized content through ‘deepfake’ attacks via AI-generated videos, voices or images make these attacks particularly harder to detect and defend against,” says Avakian.


Has the Cybersecurity Workforce Peaked?

Jobseekers are likely also running afoul of the trend in ghost-job postings. Nearly half of hiring managers have admitted to keeping job postings open even when they are not looking to fill a specific position, as a way to keep employees motivated, give the impression the company is growing, or placate overworked employees, according to a survey conducted by Clarify Capital. These ghost jobs are a significant problem for cybersecurity job seekers in particular, with one resume site estimating that 46% of listings for a cybersecurity analyst in the United Kingdom were positions that would never be filled, compared with about a third for all roles. ... Those economic pressures are another reason that purported jobs are not materializing, says Jon Brandt, director of professional practices and innovation at ISACA, an information-technology certification organization. "People can respond to any survey and say, hey, we have a need for 20 more people," he says. "But at the end of the day, unless an organization is taking active steps to hire, then that's not a data point we should be looking at right now." For entry-level workers without significant experience, the picture is especially grim. Cyberseek's career pathway data shows that demand for workers resembles a reverse pyramid.


You Have an SBOM — What Are the Next Steps?

To maximize SBOM benefits, integrate them into your SDLC and automate the process wherever possible. Automation keeps SBOMs accurate as your software evolves; regular updates reduce the risk of outdated data, enhancing transparency and security. Automating SBOM creation by integrating it into CI/CD pipelines ensures an SBOM is produced with each build, providing a reliable record of software components. By setting up quality gates in your CI/CD workflows, you can scan SBOMs for security vulnerabilities and licensing issues, stopping noncompliant components from moving forward to deployment. During quality assurance (QA), SBOMs help verify compliance and security before release, confirming that each release meets industry standards and best practices. By integrating SBOMs into CI/CD and QA processes, development teams establish a robust framework for transparency and compliance, boosting software supply chain security at all stages. ... Effective SBOM management extends beyond the development phase. Once in production, SBOMs need to be continuously monitored to ensure ongoing security and compliance, especially as new vulnerabilities emerge.
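
As one illustration of the quality-gate idea, the sketch below is a minimal, hypothetical Python check that could run as a CI/CD step: it reads a CycloneDX-style JSON SBOM (the file name and the license policy are assumptions) and fails the build when a component carries a disallowed license. A real pipeline would typically pair a check like this with a vulnerability scanner that accepts SBOM input.

    # Minimal CI quality-gate sketch: fail the build if the SBOM lists a
    # component with a license outside the approved set.
    # Assumes a CycloneDX-style JSON SBOM at ./sbom.json.
    import json
    import sys

    ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}  # hypothetical policy

    with open("sbom.json") as f:
        sbom = json.load(f)

    violations = []
    for component in sbom.get("components", []):
        for entry in component.get("licenses", []):
            license_id = entry.get("license", {}).get("id", "UNKNOWN")
            if license_id not in ALLOWED_LICENSES:
                violations.append((component.get("name"), license_id))

    if violations:
        for name, license_id in violations:
            print(f"blocked: {name} uses {license_id}")
        sys.exit(1)  # non-zero exit stops the pipeline stage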


AI-Powered Enterprise Architecture: A Strategic Imperative

AI can significantly enhance EA reusable knowledge repositories, architecture diagrams, and visualizations by analyzing real-time and historical projects, programs, solution designs, and other relevant data to identify anomalies, bottlenecks, and optimization opportunities in designing robust technology solutions. An AI-powered solution design monitoring system could, for example, detect a sudden increase in website traffic and automatically scale server resources to handle the added load; left unaddressed, such a spike could drive up costs and degrade the end-user experience, with real impact on the business. Technical experts can apply the insights learned to make future designs more robust by considering aspects of application behavior that may not have been considered before. AI can also streamline the architecture design process by generating multiple design options, simulating different scenarios, and optimizing designs based on performance and cost. Using generative design techniques, AI can create innovative and efficient solution design patterns that would be difficult or impractical to achieve through traditional methods. For example, an AI-powered design tool could generate multiple network designs, each with different topologies and configurations, and then evaluate the performance and cost of each design to identify the optimal solution.
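
A toy version of that generate-and-evaluate loop is sketched below in Python; the candidate topologies, cost figures, and scoring weights are all invented for illustration, and a real tool would simulate each design rather than look up fixed numbers.

    # Toy sketch of generative design selection: enumerate candidate network
    # designs, score each on cost and performance, keep the best trade-off.
    candidates = [
        {"name": "hub-and-spoke", "monthly_cost": 8000, "avg_latency_ms": 40},
        {"name": "full-mesh", "monthly_cost": 15000, "avg_latency_ms": 12},
        {"name": "dual-hub", "monthly_cost": 11000, "avg_latency_ms": 20},
    ]

    def score(design, cost_weight=0.5, latency_weight=0.5):
        # Lower is better; the weights encode how the business trades cost against speed.
        return (cost_weight * design["monthly_cost"] / 1000
                + latency_weight * design["avg_latency_ms"])

    best = min(candidates, key=score)
    print("selected design:", best["name"])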


How enterprises can identify and control AI sprawl

AI sprawl refers to the uncontrolled proliferation of AI tools across an organization. As with cybersecurity, there are too many tools solving too many problems without centralized oversight. This leads to inefficiencies, redundancies, and significant security risks. For instance, various departments, such as sales and marketing, might independently adopt different AI solutions for similar problems, but these solutions don’t integrate or align with each other. This drives up costs and operational inefficiency. AI sprawl also raises governance challenges, making it difficult to ensure data quality, consistency, and security. ... CIOs are in a unique position because they oversee multiple functions while CTOs tend to focus more on the engineering side of the product. At Nutanix, we’re adopting a centralized AI governance approach. We’ve established a cross-functional committee to take inventory of all existing AI tools and develop a unified strategy. This includes creating policies, frameworks, and best practices that align with the company’s overall objectives. ... With AI tools spread across an organization, it’s difficult to ensure data quality and security. Each tool might store or process data in different ways, potentially exposing sensitive information and increasing the risk of compliance violations, such as GDPR breaches.
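
A first, very simple pass at that inventory work can even be scripted; the sketch below (Python, with hypothetical tool names and categories) groups tools by the capability they provide so that redundant purchases stand out.

    # Minimal sketch: group an AI tool inventory by capability to surface
    # redundancies across departments. All names and categories are hypothetical.
    from collections import defaultdict

    inventory = [
        {"tool": "CopyGen", "owner": "Marketing", "capability": "text generation"},
        {"tool": "PitchWriter", "owner": "Sales", "capability": "text generation"},
        {"tool": "ChurnScore", "owner": "Customer Success", "capability": "predictive scoring"},
    ]

    by_capability = defaultdict(list)
    for item in inventory:
        by_capability[item["capability"]].append(f'{item["tool"]} ({item["owner"]})')

    for capability, tools in by_capability.items():
        if len(tools) > 1:
            print(f"possible overlap in '{capability}': {', '.join(tools)}")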


Strengthening OT Cybersecurity in the Age of Industry 4.0

Historically, OT systems were not considered significant threats due to their perceived isolation from the Internet. Organizations relied on physical security measures, such as door locks, passcodes, and badge readers, to protect against hands-on access and disruption to physical operational processes. However, the advent of the 4th Industrial Revolution, or Industry 4.0, has introduced smart technologies and advanced software to optimize efficiency through automation and data analysis. This digital transformation has interconnected OT and IT systems, creating new attack vectors for adversaries to exploit and access sensitive data. ... First, security leaders should isolate OT networks from IT networks and the Internet to limit the attack surface, and verify that the networks are in fact segmented. Segmentation should be monitored 24/7 to confirm it remains effective and that security controls are functioning properly; this containment strategy helps prevent lateral movement within the network during a breach. Real-time network monitoring with appropriate alert escalation (often notifying the plant supervisor or controls engineer, who is in the best position to verify whether access or a configuration change is appropriate and planned, rather than the IT SOC) aids in the rapid detection of and response to threats.
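
One narrow, scriptable slice of that verification is confirming that IT-side hosts cannot reach OT services at all. The sketch below is a minimal, hypothetical Python check (addresses and ports are invented) meant to run from the IT side; it attempts TCP connections across the segmentation boundary and flags any that unexpectedly succeed. It complements, rather than replaces, continuous monitoring.

    # Minimal sketch: run from an IT-side host to confirm OT services are not
    # reachable across the segmentation boundary. Addresses and ports are invented.
    import socket

    OT_TARGETS = [("10.20.0.5", 502), ("10.20.0.8", 44818)]  # e.g., Modbus TCP, EtherNet/IP

    for host, port in OT_TARGETS:
        try:
            with socket.create_connection((host, port), timeout=3):
                print(f"ALERT: {host}:{port} reachable from IT network - check segmentation")
        except OSError:
            print(f"ok: {host}:{port} not reachable")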


Tips for making sure your AI-powered FP&A efforts are successful

One of the biggest problems with AI is security. Many finance teams hesitate to embrace AI solutions out of concern that they could undermine data privacy or weaken data security. These concerns are well founded: handling large amounts of sensitive information requires robust protection measures, and last year Samsung banned employees from using third-party GenAI tools after sensitive data was leaked through ChatGPT. International regulations are also catching up with AI and establishing requirements around data privacy and security. It’s important to build clear policies around data use, set up and regularly review access permissions, and establish logging and monitoring to track unauthorised use or data access. Consult international best practices for AI-related data privacy and put their recommendations into practice, because they are likely to strongly inform evolving compliance regulations. ... The best AI tools in the world won’t be much use if your finance teams avoid actually using them. Many employees worry that AI could take over their jobs, or simply distrust the technology, which leads them to ignore AI-powered insights. Using AI tools effectively also requires digital literacy and technical skills that may be lacking among your employees.
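
The permissions-and-logging advice can be made concrete with a small guard around every call into a GenAI tool. The sketch below is a hypothetical Python example (role names, users, and the tool call itself are assumptions) that checks an allowlist and writes an audit record for each attempt, whether or not it is permitted.

    # Minimal sketch: allowlist plus audit log around calls to a GenAI tool.
    # Roles, users, and the underlying tool call are hypothetical.
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(filename="ai_audit.log", level=logging.INFO)
    ALLOWED_ROLES = {"fpa_analyst", "finance_manager"}

    def call_ai_tool(user, role, prompt):
        allowed = role in ALLOWED_ROLES
        logging.info("%s user=%s role=%s allowed=%s prompt_chars=%d",
                     datetime.now(timezone.utc).isoformat(), user, role, allowed, len(prompt))
        if not allowed:
            raise PermissionError(f"{user} ({role}) is not permitted to use this tool")
        return f"[model response to {len(prompt)}-char prompt]"  # placeholder for the real call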


Mind the Gap: Migration Projects – Gaining Traction or Spinning Your Wheels

Think about your migration project like running a marathon in rented shoes. (I know, I know. It’s not a photo-realistic example, but stick with me. You’ll get the point.) You start out with some good shoes, but they’re very expensive. Comfortable and well-fitting, but expensive. At, say, the 10-mile marker you have the opportunity to swap out your shoes. The ones you have are expensive and you don’t want to keep spending the money. Besides, you’re doing fine. So, you stop, select a less expensive pair, and put them on. All the while, the clock is ticking and you’re not making any progress toward the finish line. You’re betting on the expectation that you’ll make up the lost time by running the remainder of the race faster. The shoes are cheaper, but they don’t fit as well, and after a few miles your feet begin to hurt. Your pace slows considerably. You finish the race. Eventually. Far short of your goal, blood soaking through your socks, and far slower than had you not migrated. As you hobble back home with your disappointing result, you can console yourself with the money you saved as you try to convince yourself that it was worth it.



Quote for the day:

“Identify your problems but give your power and energy to solutions.” -- Tony Robbins