
Daily Tech Digest - December 12, 2025


Quote for the day:

"Always remember, your focus determines your reality." -- George Lucas



Escaping the transformation trap: Why we must build for continuous change, not reboots

Each new wave of innovation demands faster decisions, deeper integration and tighter alignment across silos. Yet, most organizations are still structured for linear, project-based change. As complexity compounds, the gap between what’s possible and what’s operationally sustainable continues to widen. The result is a growing adaptation gap — the widening distance between the speed of innovation and the enterprise’s capacity to absorb it. CIOs now sit at the fault line of this imbalance, confronting not only relentless technological disruption but also the limits of their organizations’ ability to evolve at the same pace. ... Technical debt has been rapidly amassing in three areas: accumulated, acquired, and emergent. The result destabilizes transformation efforts. ... Most modernization programs change the surface, not the supporting systems. New digital interfaces and analytics layers often sit atop legacy data logic and brittle integration models. Without rearchitecting the semantic and process foundations, the shared meaning behind data and decisions, enterprises modernize their appearance without improving their fitness. ... The new question is not, ‘How do we transform again?’ but ‘How do we build so we never need to?’ That requires architectures capable of sustaining and sharing meaning across every system and process, which technologists refer to as semantic interoperability.


The state of AI in 2026 – part 1

“The real race will be about purpose, measurable outcomes and return on investment. AI is no longer simply a technical challenge; it has become a business strategy,” said Zaccone. “However, this evolution comes with new risks. As agentic systems gain autonomy, securing the underlying AI infrastructure becomes critical. Standards are still emerging, but adopting strong security and governance practices early dramatically increases the likelihood of success. At the same time, AI is reshaping the risk landscape faster than regulation can adapt, which means it’s raising pressing questions around data sovereignty, compliance and access to AI-generated data across jurisdictions.” ... “Many teams now face practical limits around data quality, compute efficiency and responsible integration with existing systems. There is a clear gap between those who just wrap APIs around foundation models and those who actually optimise architectures and training pipelines. The next phase of AI is about reliability, interpretability and building systems that engineers can trust and improve over time,” Khan said. ... “To close the gap between the vision and reality of agentic AI over the next 12 months, enterprise agentic automation (EAA) will be essential. By blending dynamic AI with deterministic guardrails and human-in-the-loop checkpoints, EAA empowers enterprises to automate complex, exception-heavy or cognitive work without losing control,” explained Freund.


Cybersecurity isn’t underfunded — It’s undermanaged

Of course, cybersecurity projects are often complex because they need to reach across corporate silos and geographies to deliver effective protection to the business. This is not natural in large firms, which are, almost by essence, territorial and political. But beyond that, the profile of CISOs is also a key dimension: Most are technologists by trade and background, and have spent the last decade firefighting incidents, incapable of building or delivering any kind of long-term narrative. They have not developed the type of management experience, political finesse or personal gravitas that they would require to be truly successful, now that the spotlight is firmly on them from the top of the firm. Many genuinely think that chronic under-investment in cybersecurity is the root cause of insufficient maturity levels, when in fact the heart of the matter is chronic execution failure linked to endemic business short-termism. All of this points to governance and culture as the real root causes of the long-term stagnation of cybersecurity maturity levels in large firms. For the CISOs who have not internalized those cultural aspects and are almost always left out of those decisions, it breeds frustration; frustration breeds short tenures; and short tenures aggravate the management and leadership mismatch: You cannot deliver much genuinely transformative impact in large firms on those timeframes.


Document databases – understanding your options

There are two decisions to take around databases today—what you choose to run, and how you choose to run it. The latter choice covers a range of different deployment options, from implementing your own instance of a technology on your own hardware and storage, through to picking a database as a service where all the infrastructure is abstracted away and you only see an API. In between, you can look at hosting your own instances in the cloud, where you manage the software while the cloud service provider runs the infrastructure, or adopt a managed service where you still decide on the design but everything else is done for you. ... The first option is to look at alternative approaches to running MongoDB itself. Alongside MongoDB-compatible APIs, you can choose to run different versions of MongoDB or alternatives to meet your document database needs. ... The second migration option is to use a service that is compatible with MongoDB’s API. For some workloads, being compatible with the API will be enough to move to another service with minimal to no impact. ... The third option is to use an alternative document database. In the world of open source, Apache CouchDB is another document database that works with JSON and can be used for projects. It is particularly useful where applications might run on mobile devices as well as cloud instances; mobile support is a feature that MongoDB has deprecated.


Why AI Fatigue Is Sending Customers Back to Humans

The pattern is familiar across industries: digital experiences that start strong, then steadily degrade as companies prioritize cost-cutting over satisfaction. In banking, this manifests in frustratingly specific ways: chatbots that loop through unhelpful responses, automated fraud alerts that lock accounts without a path to resolution, and phone trees that make reaching a human nearly impossible. ... The path forward for community banks and credit unions isn’t choosing between digital efficiency and human service or retreating to nostalgia for branch-based banking. It’s investing strategically in both. ... Geographic proximity enables genuine empathy that algorithms can’t replicate. Rajesh Patil, CEO at Digital Agents Service Organization (CUSO), offers an example: “When there’s a disaster in a community, an AI chatbot doesn’t know what happened. But a local branch employee knows and can say, ‘I understand. Let me help you.’” The most sophisticated community bank strategy uses technology to identify opportunities while humans deliver the insight. ... After decades of pursuing digital transformation, community banks and credit unions are discovering their competitive advantage was human all along. But the path forward isn’t nostalgia for branch-based banking; it’s strategic investment in both digital infrastructure and human capacity.


The Cloud Investment Paradox: Why More Spending Isn’t Delivering AI Results

There are three common gaps that stall AI progress, even after significant cloud spend. First is data architecture. Many organisations lift and shift legacy systems into the cloud without rethinking how data will flow across teams and tools. They end up with the same fragmentation problems, just in a new environment. Second is the skills gap. Research has found that 27% of organisations lack the internal expertise to harness AI’s potential. And it is not just data scientists. You need cloud architects who understand how to design environments specifically for AI workloads, not just generic compute. Third is data quality and accessibility. AI models cannot perform well without clean, consistent input. But too often, data governance is an afterthought. Only 1 in 5 organisations feel confident that their data is truly AI-ready. That is a foundational issue, not a fine-tuning one. ... Before investing in another AI pilot or data science hire, organisations should take a step back. Is the data ready? Are the pipelines in place? Do internal teams have what they need to turn compute into insight? This means prioritising data integration and governance before algorithms. It means investing in internal training and hiring with long-term capability in mind. And it means treating cloud and AI as part of the same strategy, not separate silos.


Beyond the login: Why “identity-first” security is leaking data and why “context-first” is the fix

The uncomfortable truth emerging from recent high-profile breaches is that identity-first security—when operating in isolation—is leaking data. Threat actors have evolved; they are no longer just trying to break down the door; they are cloning the keys. The reliance on static authentication events has created a dangerous blind spot. ... Standard facial recognition often looks for geometric matches—distance between eyes, shape of the nose. Deepfakes can replicate this perfectly, turning video verification into a vulnerability rather than a safeguard. To counter this, modern security must implement advanced “Liveness Detection”. It is no longer enough to match a face to a database; the system must analyse micro-expressions and texture to ensure the face belongs to a live human presence, not a digital puppet. Yet, even with these safeguards, betting the entire security posture solely on verifying who the user is remains a risky strategy. ... To stop these leaks, security must move beyond the “Who” (Identity) and interrogate the “Where,” “What,” and “How” (Context). This requires a shift from static gates to Continuous Adaptive Trust. Context is not a single data point; it is a composite score derived from real-time telemetry. ... For technology leaders, this convergence is not just a technical upgrade; it is a strategic necessity for compliance. Frameworks like the Digital Personal Data Protection (DPDP) Act require organisations to implement “reasonable security safeguards”.
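The "composite score derived from real-time telemetry" idea can be sketched in a few lines. Everything below (the signal names, weights, and thresholds) is an illustrative assumption, not a description of any particular vendor's scoring model:

```python
def context_risk_score(signals: dict) -> float:
    """Combine real-time telemetry signals into a single risk score in [0, 1]."""
    # Hypothetical weights; a real deployment would tune these from its own data.
    weights = {
        "impossible_travel": 0.35,   # geo-velocity between recent logins
        "unmanaged_device": 0.25,    # device posture check failed
        "unusual_hours": 0.15,       # activity outside the user's baseline window
        "anomalous_access": 0.25,    # resources outside the normal access pattern
    }
    score = sum(weights[k] for k, fired in signals.items() if fired)
    return round(min(score, 1.0), 2)


def trust_decision(score: float) -> str:
    """Adaptive response rather than a one-time gate: allow, step up, or block."""
    if score < 0.3:
        return "allow"
    if score < 0.6:
        return "step-up"   # e.g. re-verify with a liveness check or MFA
    return "block"
```

The point of the sketch is the shape, not the numbers: a login that passed authentication can still be challenged or blocked mid-session when context shifts, which is what distinguishes Continuous Adaptive Trust from a static gate.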


Why Critical Infrastructure Needs Security-Forward Managed File Transfer Now

Today’s cyber attackers often use ordinary documents and files to breach organizations. Without strong security checks, it’s surprisingly easy for bad actors to cause major problems. Attacks exploit both common file formats and weaknesses in legacy operational technology (OT) environments. ... Modern managed file transfer (MFT) requires a layered security approach to effectively combat file-based threats and comply with best practices. This approach dictates that organizations must encrypt files at rest and in transit, employ strong hash checks, and use digital signing to validate the origin and integrity of files throughout their lifecycle. ... Many MFT tools incorporate multi-layered malware scanning. This works by scanning every file with multiple malware engines rather than relying on a single one, given that different engines detect different malware families and variants. Parallel multiscanning not only improves detection rates but also shortens the window for exploitation of zero-day vulnerabilities and polymorphic malware. This helps to reduce the chance of false negatives before files enter sensitive networks. The scanning should be directly integrated into upload, download, and workflow steps so no file can move between zones without passing through a multi-engine inspection pipeline. ... MFT workflows can automatically route files to a sandbox based on risk scores, file types, sender reputation, or country of origin. Then, files are only released upon passing behavioral checks.
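The parallel multiscanning and risk-based sandbox routing described above can be sketched as follows. The detection "engines" here are stubs standing in for real AV engines, and the extensions and thresholds are illustrative assumptions, not any MFT product's actual policy:

```python
import concurrent.futures

# Stub engines standing in for real AV engines; each flags a payload or not.
def engine_a(data: bytes) -> bool:
    return b"EICAR" in data                      # signature-style match

def engine_b(data: bytes) -> bool:
    return data.startswith(b"MZ") and b"packed" in data   # heuristic match


def multiscan(data: bytes, engines) -> bool:
    """Run every engine in parallel; any single detection blocks the file."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        verdicts = list(pool.map(lambda e: e(data), engines))
    return any(verdicts)


def route(filename: str, data: bytes, risk_score: float) -> str:
    """Gate between zones: block on detection, sandbox on risk, else release."""
    if multiscan(data, [engine_a, engine_b]):
        return "blocked"
    risky_types = (".docm", ".exe", ".js")       # illustrative high-risk extensions
    if risk_score >= 0.5 or filename.endswith(risky_types):
        return "sandbox"                         # released only after behavioral checks
    return "released"
```

The design point is that `route` sits inline in the transfer workflow, so a file cannot cross a zone boundary without passing the multi-engine check first.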


Fight AI Disinformation: A CISO Playbook for Working with Your C-Suite

Unlike misinformation or malinformation, which may be inaccurate or misleading but not necessarily harmful, disinformation is both false and designed specifically to damage organizations. It can be episodic, targeting individuals for immediate gain, such as tricking an employee into transferring funds via a deepfaked call. It can also be industrial, operating at scale to undermine brand reputation, manipulate stock prices, or probe organizational defenses over time. The attack surfaces are broad: internally, adversaries exploit corporate meeting solutions, email, and messaging platforms to bypass authentication and impersonate trusted individuals. ... Without clear ownership and cross-functional collaboration, efforts to counter disinformation are often disjointed and ineffectual. In some cases, organizations leave disinformation as an unmanaged risk, exposing themselves to episodic attacks on individuals and industrial campaigns targeting reputation and financial stability. Another common pitfall is failing to differentiate between types of information threats. CISOs should focus their resources on disinformation where intent to harm and lack of accuracy intersect, rather than attempting to police all forms of misinformation or malinformation. ... CISOs must lead the way in communicating the risks and fostering a culture of shared responsibility, engaging all employees in detection, reporting, and response. This includes developing internal tooling for monitoring and reporting, promoting transparency, and ensuring ongoing education about evolving threats.


Why AI Scaling Innovation Requires an Open Cloud Ecosystem

Developers and enterprises should have the flexibility to construct custom multi-cloud infrastructure that provides the appropriate specifications. Distributing workloads allows them to move faster on new projects without driving up infrastructure spend and overconsuming resources. It also enables them to prioritize in-country data residency for enhanced compliance and security. With an open ecosystem, developers and enterprises can stagger cloud-agnostic applications across a mosaic of public and private clouds to optimize hardware efficiency, maintain greater autonomy in data management and data security, and run applications seamlessly at the edge. This promotes innovation at all layers of the stack, from training to testing to processing, making it easier to deploy the best possible services and applications. An open ecosystem also reduces the branding and growth risks associated with hyperscaler dependence. Often, when a developer or enterprise runs their products exclusively on a single platform, they become less their own product and more an outgrowth of their hyperscaler cloud provider; instead of selling their app on its own, they sell the hyperscaler’s services. ... Supporting hyper-specific AI use cases often begets complex development demands: from hefty compute power, to multi-model frameworks, to strict data governance and pristine data quality. Even large enterprises don’t always have the resources in-house to account for these parameters.

Daily Tech Digest - October 12, 2025


Quote for the day:

"Trust because you are willing to accept the risk, not because it's safe or certain." -- Anonymous



AI and Data Governance: The Power Duo Reshaping Business Intelligence

Fortunately, the relationship between AI and data governance isn’t one-sided. By leveraging automation, pattern recognition, and real-time analytics, AI enables organizations to manage data quality, compliance, and security more effectively. AI models can identify inaccuracies or inconsistencies, flag anomalies, and automatically correct missing or duplicate records, minimizing the risk of generating misleading results from poor-quality datasets. It can track organizational data in real time, ensuring accurate classification of sensitive information, enforcing access controls, and proactively identifying policy violations before they escalate. This approach enables organizations to move away from manual auditing and adopt automated, self-correcting governance workflows. ... To leverage the full potential of the relationship between AI and governance, organizations must establish a continuous feedback loop between their governance frameworks and AI systems. AI shouldn’t function independently; it must be constantly updated and aligned with governance policies to maintain accuracy, transparency, and compliance. One of the best ways to achieve this is by using intelligent data platforms such as Semarchy’s master data management (MDM) and data catalog solutions. These solutions unify and control AI data from a trusted, single source of truth, ensuring consistency across business functions.
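The self-correcting governance workflow described here (removing duplicate records and flagging anomalies before they pollute downstream analytics) can be sketched with nothing but the standard library. The field names and the z-score threshold are illustrative assumptions, not Semarchy-specific behavior:

```python
import statistics

def dedupe(records: list[dict], key: str) -> list[dict]:
    """Drop duplicate records, keeping the first occurrence of each key value."""
    seen, clean = set(), []
    for rec in records:
        if rec[key] not in seen:
            seen.add(rec[key])
            clean.append(rec)
    return clean


def flag_anomalies(values: list[float], z: float = 2.0) -> list[int]:
    """Return indices of values more than `z` standard deviations from the mean."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values) or 1.0   # avoid division by zero on flat data
    return [i for i, v in enumerate(values) if abs(v - mean) / sd > z]
```

In an automated governance loop, flagged indices would feed a review queue or trigger a correction rule rather than silently entering the training set.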


Building cyber resilience in a volatile world

Supply chain attacks show just how fragile the ecosystem can be, given that when one link breaks, the shockwaves ripple across agencies and sectors. That’s why the shift away from outmoded ideas of “prevention” by building walls around environments to a new kind of resilience is so stark. For example, zero trust is no longer optional; it’s the baseline. Verification must be constant, and assumptions about “safe” internal networks belong in the past. Meanwhile, AI governance and quantum-resistant cryptography have jumped from academic conversations to immediate government standards. Institutional muscle is being flexed too.  ... The transformation ahead is as much cultural as technical. Agencies must shift from being static defenders to dynamic operators, and need to be ready to adapt, recover, and press on even as attacks intensify. Cybersecurity is not just another line item in the IT budget, but rather the backbone of national resilience. The ability to keep delivering services, protect citizen trust, and safeguard critical infrastructure is now inseparable from how well agencies manage cyber risk. Resilience is not built by chance. It’s built through strategy, investment, and relentless partnership. It means turning frameworks into live capability, leveraging industry expertise, and embedding a mindset that sees cyber not as a constraint but as a foundation for confidence and continuity.


Fighting Disinformation Demands Confronting Social and Economic Drivers

Moving beyond security theater requires embracing ideological critique as a foundational methodology for information integrity policy research. This means shifting from “how do we stop misinformation?” to “what material and symbolic interests does information serve, and how do power relations shape what counts as legitimate knowledge?” This approach demands examining not just false information, but the entire apparatus through which some beliefs become hegemonic while others are rendered verboten. Ideological critique offers three analytical tools absent from current information integrity policy research. First, it provides established scholarly techniques for examining how seemingly neutral technical systems encode worldviews and serve specific class interests. Platform algorithms, content moderation policies, and fact-checking systems all embed assumptions about authority, truth, and social order that more often than not favor existing power arrangements. Second, it offers frameworks for understanding how dominant groups maintain cognitive hegemony: the ability to shape not just what people think, but how they think. Third, it provides tools for analyzing how groups develop counter-hegemonic consciousness, alternative meaning-making systems and their ‘hidden transcripts’. Adopting these techniques can inform better policy responses to disinformation.


Cloud Infrastructure Isn't Dead, It's Just Becoming Invisible

Let's be honest: most cloud platforms are more alike than different. Storage, compute, and networking are commoditized. APIs are standard. Reliability and scalability are expected. Most agree that the cloud itself is no longer a differentiator, it's a utility. That's why the value is moving up the stack. Engineers don't need more IaaS, they need better ways to work with it. They want file systems that feel local, even when they're remote. They want zero-copy collaboration and speed. And they want all of that without worrying about provisioning, syncing, or latency. Today, cloud users are shifting their expectations toward solutions that utilize standard infrastructure such as object storage and virtual servers, yet abstract away the complexity. The appeal is in performance and usability improvements that make infrastructure feel invisible. ... What makes this shift important is that it's rooted in practical need. When you're working with terabytes or petabytes of high-resolution video, training a model on noisy real-world data, or collaborating across time zones on a shared dataset, traditional cloud workflows break down. Downloading files locally isn't scalable, and copying data between environments wastes time and resources. Latency is a momentum killer. This is where invisible infrastructure shines. It doesn't just abstract the cloud, it makes it better suited to the way developers actually build and collaborate today.


The great misalignment in business transformation

It’s easy to point the finger at artificial intelligence (AI) for today’s disruption in the tech workforce. After all, AI is changing how coding, analysis and even project management are done. Entire categories of tasks are being automated. Advocates argue that workers will inevitably be replaced, while critics frame it as the next wave of technological unemployment. Recent surveys have shown that employee optimism is fading. ... The problem is compounded by the emphasis on being “more artistic” or “more technical.” Both approaches miss the mark. Neither artistry for its own sake nor hyper-technical detail guarantees relevance if business problems remain unsolved. The technology industry has always experienced cycles of boom and bust. From the dot-com bubble to the recent AI surge, waves of hiring and layoffs are nothing new. What is new, however, is the growing realization that some jobs may not need to come back at all. ... Analysis without insight devolves into repetitive reporting, adding noise rather than clarity. Creativity without business grounding drifts into theatre, producing workshops and “innovation sessions” that inspire but fail to deliver results. Both are missing the target. Worse still, companies have proven they can operate without many of these roles altogether. The lesson is clear: being more artistic or more technical is not the answer. 


The Architecture Repository: Turning Enterprise Architecture into a Strategic Asset

While the Enterprise Continuum provides the context — a spectrum from generic to organization-specific models — the Architecture Repository provides the structure to store, manage, and evolve those models. ... At the heart of the repository lies the Architecture Metamodel. This is the blueprint for how architectural content is structured, related, and interpreted. It defines the vocabulary, relationships, and rules that govern the creation and classification of artifacts. The metamodel ensures consistency across the repository. Whether you’re modeling business processes, application components, or data flows, the metamodel provides a common language and structure. It’s the foundation for traceability, reuse, and integration. In practice, the metamodel is tailored to the organization’s needs. It reflects the enterprise’s modeling standards, governance policies, and stakeholder requirements. It’s not just a technical artifact — it’s a strategic enabler of clarity and coherence. ... Architecture must respond to real needs. The Architecture Requirements Repository captures all authorized requirements — business drivers, stakeholder concerns, and regulatory mandates — that guide architectural development. ... Architecture is not just about models — it’s about solutions. The Solutions Landscape presents the architectural representation of Solution Building Blocks (SBBs) that support the Architecture Landscape.


Cyberpsychology’s Influence on Modern Computing

Psychological research on decision making and cognitive processes has been fundamental to understanding perceptions and behavior in the areas of cybersecurity and cyberprivacy. Much of this work focuses on cognitive biases and emotional states, which inform the actions of both users and attackers. ... Both cognition and affect play a role in these phenomena. Specifically, under conditions of diminished information processing—such as in the case of cognitive demands or affective experiences such as a positive mood state—people are less likely to make decisions based on strongly held beliefs. For example, a consumer’s positive emotional state, such as happiness with the Internet, mediates the negative effects of information-collection concerns on their willingness to disclose personal information. Interestingly, cybersecurity experts are as vulnerable to phishing and social engineering attacks as those who are not cybersecurity experts. A deep understanding of the perceptual, cognitive, and emotional mechanisms that result in lapses of judgment or even behavior incongruent with one’s intellectual understanding is vital to minimizing such threats. In addition to cognitive and emotional states, personality models have provided insight into human behavior vis-à-vis technology. The “big five” personality theory, also known as the five-factor model, is a widely accepted framework that has been applied to a broad range of cyber-related behaviors, including cybersecurity.


The Cybersecurity Skills Gap and the Role of Diversity

Cybersecurity is often presented as a technically demanding field, she points out. “This further discourages some women from first entering the industry. For those who have, it’s then about being able to continue growing their careers when they may feel challenged by perceived technical demands,” says Pinkard. And today, cybersecurity is not a purely technical subject. Demand for technical skills will always exist, but the job has changed, says Amanda Finch, CEO, The Chartered Institute for Information Security. ... While the low number of women in cybersecurity is concerning, it’s also important to consider how other types of diversity can help fill the skills gap in the workforce. Inclusion and opportunity are “100% about more than just bringing in more women.” “It's about the different life perspective,” says Pinkard. Those “lived perspectives” are driven by areas such as neurodiversity, ethnic diversity and physical ability diversity, she says. ... Too many companies still treat diversity as a compliance exercise, says Mullins. “When it was no longer a legal requirement in the US, many simply stopped. Others will say, ‘we want more women’, but won’t update their maternity policies and complain that only men apply to their roles. Or they say ‘we want neurodiverse talent’, but resist implementing more flexible working policies to facilitate them.”


Data quality is no longer optional

AI systems can only be as good as the data that feeds them. When information is incomplete, inconsistent or trapped in silos, the insights and predictions those systems produce become unreliable. The risk is not just missed opportunities but strategic missteps that erode customer trust and competitive positioning. ... Companies with a strong digital foundation are already ahead in AI adoption, and those without risk drowning in information while starving their AI models of the clean, reliable inputs they need. But before any organisation can realise AI’s full potential, it must first build a resilient data foundation, and the enterprises that place data quality at the heart of their digital strategy are already seeing measurable gains. By investing in robust governance, integrating AI with data management and removing silos across departments, they create connected teams and more agile operations. ... Raising data quality is not a one-off exercise; it requires a cultural shift that calls for collaboration across IT, operations and business units. Leaders must set clear standards for how data is captured, cleaned and maintained, and champion the idea that every employee is a steward of data integrity. The long-term challenge is to design data architectures that can support scale and complexity and embrace distributed paradigms that support interoperability. These architectures do more than maintain order. 


Shadow AI in Your Systems: How to Detect and Control It

"Shadow AI" is when people in an organization use AI tools like generative models, coding assistants, agentic bots, or third-party LLM services without getting permission from IT or cybersecurity. This is the next step in the evolution of "shadow IT," but the stakes are higher because models can read sensitive text, make API calls on their own, and do automated tasks across systems. Industry definitions and primers say that shadow AI happens when employees use AI apps without official supervision, which can lead to data leaks, privacy issues, and compliance problems. ... Agents that automate web interactions usually need credentials, API keys, or tokens to do things for employees. Agents can get into systems directly if keys are poorly managed or embedded in scripts. ... Telltale queries appear as outbound traffic to known AI provider endpoints, nonstandard hostname patterns, or unusual POST bodies. Modern proxy and firewall logs often show URLs and headers that reveal which model vendors are being used. Check your web gateway and proxy logs for spikes in API calls and endpoints that you don't recognize. ... Agents often perform a high volume of navigations, clicks, and form submissions in a short amount of time, which is different from how people behave. Look for unnatural navigation patterns, intervals that are always the same, or pages that are crawled in tight loops.
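Both detection heuristics above (matching proxy-log destinations against known AI provider endpoints, and spotting machine-regular request timing) can be sketched briefly. The hostname list and jitter threshold are illustrative assumptions to be replaced with your own gateway's intelligence:

```python
import statistics

# Illustrative AI provider hostnames to watch for in proxy logs; a real
# deployment would maintain this list from its web gateway's categorization.
AI_HOSTS = ("api.openai.com", "api.anthropic.com",
            "generativelanguage.googleapis.com")


def flag_ai_traffic(log_lines: list[str]) -> list[str]:
    """Return proxy-log lines whose destination matches a known AI endpoint."""
    return [line for line in log_lines if any(h in line for h in AI_HOSTS)]


def looks_automated(timestamps: list[float], max_jitter: float = 0.5) -> bool:
    """Flag sessions whose inter-request intervals are suspiciously regular.

    Agents tend to fire requests at near-constant intervals; human browsing
    produces much noisier gaps. Low variance in the gaps is the signal.
    """
    if len(timestamps) < 3:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.pstdev(gaps) < max_jitter
```

Either signal alone produces false positives (a sanctioned integration also calls AI endpoints on a timer), so in practice these feed a triage queue rather than an automatic block.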

Daily Tech Digest - October 07, 2025


Quote for the day:

"There is only one success – to be able to spend your life in your own way." -- Christopher Morley



5 Critical Questions For Adopting an AI Security Solution

An AI-SPM solution must be capable of seamless AI model discovery, creating a centralized inventory for complete visibility into deployed models and associated resources. This helps organizations monitor model usage, ensure policy compliance, and proactively address any potential security vulnerabilities. By maintaining a detailed overview of models across environments, businesses can proactively mitigate risks, protect sensitive data, and optimize AI operations. ... An effective AI-SPM solution must tackle risks that are specific to AI systems. For instance, it should protect training data used in machine learning workflows, ensure that datasets remain compliant under privacy regulations, and identify anomalies or malicious activities that might compromise AI model integrity. Make sure to ask whether the solution includes built-in features to secure every stage of your AI lifecycle—from data ingestion to deployment. ... When evaluating an AI-SPM solution, ensure that it automatically maps your data and AI workflows to governance and compliance requirements. It should be capable of detecting non-compliant data and providing robust reporting features to enable audit readiness. Additionally, features like automated policy enforcement and real-time compliance monitoring are critical to keeping up with regulatory changes and preventing hefty fines or reputational damage.
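The "centralized inventory" and "automated policy enforcement" capabilities described above reduce, at their simplest, to checking discovered models against required governance metadata. This is a generic sketch, not any AI-SPM product's API; the required tags are illustrative assumptions:

```python
# Governance tags every discovered model must carry (illustrative policy).
REQUIRED_TAGS = {"owner", "data_classification", "last_reviewed"}


def find_policy_gaps(inventory: list[dict]) -> dict[str, list[str]]:
    """Map each model in the discovered inventory to its missing governance tags.

    Models with a complete tag set are omitted, so an empty result means
    the inventory is audit-ready under this policy.
    """
    gaps = {}
    for model in inventory:
        missing = sorted(REQUIRED_TAGS - set(model.get("tags", {})))
        if missing:
            gaps[model["name"]] = missing
    return gaps
```

An AI-SPM platform would run a check like this continuously against its discovery feed, turning gaps into alerts or blocking deployments rather than producing a one-off report.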


The architecture of lies: Bot farms are running the disinformation war

As bots become more common and harder to tell from real users, people start to lose confidence in what they see online. This creates the liar’s dividend, where even authentic content is questioned simply because everyone knows fakes are out there. If any critical voice or inconvenient fact can be dismissed as just a bot or a deepfake, democratic debate takes a hit. AI-driven bots can also create the illusion of consensus. By making a hashtag or viewpoint trend, they create the impression that everyone is talking about it, or that an extreme position enjoys broader support than it actually has.  ... It’s still an open question how well online platforms stop malicious, bot-driven content, even though they are the ones responsible for policing their own networks. Harmful AI bots continue to get through the defenses of major social media platforms. Even though most have rules against automated manipulation, enforcement is weak and bots exploit the gaps to spread disinformation. Current detection systems and policies aren’t keeping up, and platforms will need stronger measures to address the problem. ... The EU and the US are both moving to address bot-driven disinformation. In the EU, the Digital Services Act obliges large online platforms to assess and mitigate systemic risks such as manipulation, and to provide vetted researchers with access to platform data.


Is the CISO chair becoming a revolving door?

“A CISO is interacting with a lot of interfaces, and you need to have soft skills and communicate well with others. In many cases, you need to drive others to take action, and that’s super tedious. It’s very difficult to keep doing it over time,” Geiger Maor says. “In many cases, you’re in direct conflict with company goals and your goals. You’re like a salmon fish going upstream against everybody else. This makes it very difficult to keep a long tenure.” ... That constant exposure to risk and blame is another reason some CISOs hesitate to take the role in the first place, according to Rona Spiegel, senior manager, security and trust, mergers and acquisitions at Autodesk and former cloud governance leader at Wells Fargo and Cisco. “The bad guys, especially now with AI and automation, they’re getting more sophisticated, and they only have to be right once, but the CISO has to be right all day every day. They only have to be wrong once, and they get blamed … you’re an operational cost centre no matter what because you’re not bringing in revenue, so if something goes wrong … all roads lead to the CISO,” Spiegel says. ... Chapman is also seeing a rise in fractional CISOs, brought in part-time to set up frameworks or oversee specific projects. “It really comes down to the individual,” he says. “Some want that top seat, speaking to the board, communicating risk. But I am also seeing some say, ‘It doesn’t have to be a CISO role.’”


RPA versus hyperautomation: Understanding accuracy (performance) benchmarks in practice

RPA is like that reliable coworker who never complains and does exactly what you ask. It loves repetitive, predictable tasks such as copying and pasting data, moving files between systems or generating standard reports. When everything goes according to plan, RPA is perfect. ... Hyperautomation is the next-level upgrade. It combines RPA with AI, natural language processing (NLP), intelligent document processing (IDP), process mining and workflow orchestration. In simple terms, it doesn’t just follow rules. It learns, adapts and keeps things moving even when the world throws curveballs. With hyperautomation, processes that would have stopped RPA cold continue without a hitch. ... RPA and hyperautomation are not rivals. They are more like teammates with different strengths. RPA shines when tasks are stable and repetitive, quietly doing its job without fuss. Hyperautomation brings in intelligence, flexibility and the ability to handle entire processes from start to finish. When applied thoughtfully, hyperautomation cuts down on manual corrections, handles exceptions smoothly and delivers value at scale. All this happens without the IT team needing to hire extra coffee runners to fix errors or babysit the robots. The real goal is to build automation that works at the process level, adapts to change and keeps running even when things go off script.


The pros and cons of AI coding in the IT industry

Although now being used by the majority of programmers, AI tools were not universally welcomed upon their launch, and it has taken time to move beyond the initial doubts and suspicion surrounding generative AI. It’s important to note that risks remain when using AI-generated code, which organizations will have to mitigate. “Integrating AI into our coding processes was initially met with skepticism, both within our organization and across the industry,” Jain explains. “Concerns included AI's ability to comprehend complex codebases, the potential for generating buggy code, adherence to company standards, and issues surrounding code and data privacy.” However, since the launch of the first generative AI tools at the end of 2022, Jain says that the rapid evolution of AI technology’s implementation has alleviated many concerns, with features such as codebase indexing and secure training protocols addressing major concerns. “These advancements have enabled AI tools to understand code context, follow company standards, and maintain robust security measures,” Jain tells ITPro. Nevertheless, security and accountability are also major factors for any IT company to consider when looking to use AI as part of the development process, and research continues to show glaring vulnerabilities in AI code. There are certain steps that simply can’t be replaced by AI.


Why AI Is Forcing an Invisible Shift in Risk Management

Without the need for complex, technical coding knowledge, there are increasingly more departments within a business capable of driving and contributing to the development lifecycle, forcing a shift from centralized innovation to development that is fractalized across the entire organization. This shift has been revolutionary, driving more lucrative development by empowering technical teams and business leaders to align on goals and work hand-in-hand. Still, this transition has changed the organization’s relationship with risk. ... In the age of distributed application building, organizations have to raise more questions as it relates to governance and risk, which can mean many different things depending on where the technology sits in the business. Is the application going to be customer-facing? How sensitive is the data? How should it be stored? What are some other privacy considerations? These are all questions businesses must ask in the age of fractured development — and the answers will vary from case to case. ... The shift to decentralized development is not the first change technology has seen, and it’s certainly not the last. The key to staying ahead of the curve is paying attention to the invisible shifts that come with these disruptions, such as the changes that have recently come with the adoption of AI and low code. As these technologies reimagine the typical risk management and compliance model, it’s important for businesses to come to terms with adaptive governance and react as such.


How cross-functional teams rewrite the rules of IT collaboration

When done right, IT isn’t just an optional part of cross-functional collaboration, it’s an integral part of what makes collaboration possible. “There’s a lot of overlap now between IT, sales, finance and regulatory compliance,” says George Dimov, managing owner of Dimov Tax. ... What happens when IT plays a key role in breaking down barriers? First, getting IT involved in cross-functional teams means IT is at the table from day one. Rather than having an environment where a department requests a report or tool from IT after the fact, or has it digitize information later on, IT is present in all meetings. As more organizations recognize the inherent importance of digital transformation, the need for IT expertise — including perspectives from individuals with different types of IT experience — becomes more pronounced. It’s up to the CIO to provide the cross-functional leadership that ensures IT is involved in such efforts from the start. ... Even in situations when IT isn’t directly involved in day-to-day collaboration, it can still play a valuable role by providing technology resources that aid and facilitate collaboration. Ideally, IT should be part of the solution to eliminate barriers, whether that’s through digital sharing tools, reporting mechanisms, or something else. IT can and should be at the forefront of enabling cross-functional collaboration between teams and departments.


Service-as-software: The new control plane for business

Historically, enterprises ran on islands of automation — enterprise resource planning for the back office and, later, a proliferation of apps. Customer relationship management was the first to introduce a new operating model and a new business model. Today, the enterprise itself must begin to operate like a software company. That requires harmonizing those islands into a single unified layer where data and application logic collapse into an integrated System of Intelligence. Agents rely on this harmonized context to make decisions and, when needed, invoke legacy applications to execute workflows. Operating this way also demands a new operations model: a build-to-order assembly line for knowledge work that blends the customization of consulting with the efficiency of high-volume fulfillment. Humans supervise agents, and in doing so progressively encode their expertise into the system. ... The important point to remember is that islands of automation impede management’s core function – planning, resource allocation and orchestration with full visibility across levels of detail and business domains. Data lakes do not solve this by themselves; each star schema is another island. Near-term, organizations can start small and let agents interrogate a single domain (for example, the sales cube) and take limited actions by calling systems of record via MCP servers, for example, viewing a customer’s complaints and initiating a return authorization.
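The pattern of agents invoking legacy applications through a tool layer can be sketched with a minimal dispatcher. This is a hypothetical stand-in, not a real MCP implementation: the tool names (`view_complaints`, `initiate_return`) and the in-memory "systems of record" are invented for illustration, following the customer-complaints example in the excerpt.

```python
class ToolRegistry:
    """Minimal stand-in for an MCP-style tool layer: agents invoke
    named tools that wrap calls into legacy systems of record."""

    def __init__(self):
        self._tools = {}

    def register(self, name, fn):
        self._tools[name] = fn

    def invoke(self, name, **kwargs):
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name](**kwargs)

# Hypothetical systems of record, here just in-memory stand-ins.
complaints_db = {"C-42": ["late delivery", "damaged box"]}
rmas = []

registry = ToolRegistry()
registry.register("view_complaints",
                  lambda customer_id: complaints_db.get(customer_id, []))
registry.register("initiate_return",
                  lambda customer_id, reason:
                      rmas.append((customer_id, reason)) or f"RMA-{len(rmas)}")
```

An agent limited to the registered tools can view a customer's complaints and open a return authorization, but cannot reach into the underlying systems directly, which is the "limited actions via MCP servers" idea in miniature.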


Companies are making the same mistake with AI that Tesla made with robots

Shai Ahrony, CEO of marketing agency Reboot Online, calls this phenomenon the "AI aftershock." "Companies that rushed to cut jobs in the name of AI savings are now facing massive, and often unexpected costs," he told ZDNET. "We've seen customers share examples of AI-generated errors -- like chatbots giving wrong answers, marketing emails misfiring, or content that misrepresents the brand -- and they notice when the human touch is missing." ... Some companies have already learned painful lessons about AI's shortcomings and adjusted course accordingly. In one early example from last year, McDonald's announced that it was retiring an automated order-taking technology that it had developed in partnership with IBM after the AI-powered system's mishaps went viral across social media. ... McDonald's and Klarna's decisions to backtrack on AI in favor of humans are reminiscent of a similar about-face from Tesla. In 2018, after Tesla failed to meet production quotas for its Model 3, CEO Elon Musk admitted in a tweet that the electric vehicle company's reliance upon "excessive automation…was a mistake." "Humans are underrated," he added. Businesses aggressively pushing to deploy AI-powered customer service initiatives in the present could come to a similar conclusion: that even though the technology helps to cut spending and boost efficiency in some domains, it isn't able to completely replicate the human touch.


How Can the Usage of AI Help Boost DevOps Pipelines

AI now plays a key role in CI/CD, using machine learning algorithms and intelligent automation to detect errors proactively, optimize resource usage, and accelerate release cycles. With AI, CI/CD pipelines can learn, adapt and optimize themselves, redefining software development from start to finish. By combining AI and DevOps, you can eliminate silos, recover faster from outages and open up new business revenue streams. Today’s businesses are increasingly leveraging artificial intelligence capabilities throughout their DevOps pipelines to make their CI/CD pipelines intelligent, thereby enabling them to predict problems faster, optimize the pipelines if needed, and recover from failures without the need for any human intervention. ... When you adopt AI into the DevOps practices in your organization, you are applying specific technologies to automate, optimize, and enhance each stage of the software development lifecycle – coding, testing, deployment, and monitoring. Today’s organizations are using AI in their DevOps pipelines to drive innovation, enabling teams to work seamlessly and achieve rapid development and deployment cycles. ... AI can help in DevSecOps in ways such as automating security testing, automating threat detection, and streamlining incident response. You can use AI-powered tools to scan your application source code for security vulnerabilities, automate software patches, automate incident responses, and monitor in real-time to identify anomalies.
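As a minimal illustration of the proactive error detection described here, the sketch below flags anomalous build durations with a simple z-score test. Real AI-augmented pipelines use far richer models; the threshold and the example durations are assumptions for the sketch.

```python
from statistics import mean, stdev

def flag_anomalous_builds(durations, threshold=2.5):
    """Return indices of build durations more than `threshold` sample
    standard deviations above the mean -- a simple stand-in for the
    models AI-augmented pipelines use to surface problems early."""
    if len(durations) < 3:
        return []  # not enough history to judge
    mu, sigma = mean(durations), stdev(durations)
    if sigma == 0:
        return []  # perfectly stable pipeline, nothing to flag
    return [i for i, d in enumerate(durations) if (d - mu) / sigma > threshold]
```

A pipeline could run this after each build and alert, or trigger deeper diagnostics, when the latest duration is flagged, rather than waiting for a hard failure.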

Daily Tech Digest - November 21, 2024

Building Resilient Cloud Architectures for Post-Disaster IT Recovery

A resilient cloud architecture is designed to maintain functionality and service quality during disruptive events. These architectures ensure that critical business applications remain accessible, data remains secure, and recovery times are minimized, allowing organizations to maintain operations even under adverse conditions. To achieve resilience, cloud architectures must be built with redundancy, reliability, and scalability in mind. This involves a combination of technologies, strategies, and architectural patterns that, when applied collect ... Cloud-based DRaaS solutions allow organizations to recover critical workloads quickly by replicating environments in a secondary cloud region. This ensures that essential services can be restored promptly in the event of a disruption. Automated backups, on the other hand, ensure that all extracted data is continually saved and stored in a secure environment. Using regular snapshots can also provide rapid restoration points, giving teams the ability to revert systems to a pre-disaster state efficiently. ... Infrastructure as code (IaC) allows for the automated setup and configuration of cloud resources, providing a faster recovery process after an incident. 
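The snapshot-based restoration points mentioned above can be modeled with a toy store that retains a bounded set of recovery points. This is a conceptual sketch, not a DRaaS or cloud-provider API; the names and the retention policy are assumptions.

```python
import copy

class SnapshotStore:
    """Keeps rolling restoration points so state can be reverted to a
    pre-disaster snapshot -- a toy model of the snapshot/restore flow."""

    def __init__(self, max_snapshots=5):
        self.max_snapshots = max_snapshots
        self._snapshots = []  # (label, state) tuples, oldest first

    def take(self, label, state):
        # Deep-copy so later mutations don't corrupt the restore point.
        self._snapshots.append((label, copy.deepcopy(state)))
        if len(self._snapshots) > self.max_snapshots:
            self._snapshots.pop(0)  # retain only the most recent points

    def restore(self, label):
        for snap_label, state in reversed(self._snapshots):
            if snap_label == label:
                return copy.deepcopy(state)
        raise KeyError(f"no snapshot named {label!r}")
```

The bounded retention mirrors what real snapshot schedules do: regular restore points are kept, old ones are aged out, and recovery means reverting to the most recent clean point.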


Agile Security Sprints: Baking Security into the SDLC

Making agile security sprints effective requires organizations to embrace security as a continuous, collaborative effort. The first step? Integrating security tasks into the product backlog right alongside functional requirements. This approach ensures that security considerations are tackled within the same sprint, allowing teams to address potential vulnerabilities as they arise — not after the fact when they're harder and more expensive to fix. ... By addressing security iteratively, teams can continuously improve their security posture, reducing the risk of vulnerabilities becoming unmanageable. Catching security issues early in the development lifecycle minimizes delays, enabling faster, more secure releases, which is critical in a competitive development landscape. The emphasis on collaboration between development and security teams breaks down silos, fostering a culture of shared responsibility and enhancing the overall security-consciousness of the organization. Quickly addressing security issues is often far more cost-effective than dealing with them post-deployment, making agile security sprints a necessary choice for organizations looking to balance speed with security.
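A minimal sketch of the idea that security tasks live in the same backlog and the same sprint as functional work: the sprint cannot close while its security items remain open. The item model and the closing rule are illustrative assumptions, not a prescribed process.

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    title: str
    kind: str       # "feature" or "security"
    done: bool = False

def sprint_can_close(items):
    """A sprint closes only when its security items are done alongside
    the functional work -- vulnerabilities are not deferred to later."""
    return all(item.done for item in items if item.kind == "security")
```

Encoding the rule this way makes the trade-off explicit: an open security item blocks the sprint the same way an unfinished feature would, which is exactly the "same sprint, not after the fact" discipline the excerpt argues for.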


The new paradigm: Architecting the data stack for AI agents

With the semantic layer and historical data-based reinforcement loop in place, organizations can power strong agentic AI systems. However, it’s important to note that building a data stack this way does not mean downplaying the usual best practices. This essentially means that the platform being used should ingest and process data in real-time from all major sources, have systems in place for ensuring the quality/richness of the data and then have robust access, governance and security policies in place to ensure responsible agent use. “Governance, access control, and data quality actually become more important in the age of AI agents. The tools to determine what services have access to what data become the method for ensuring that AI systems behave in compliance with the rules of data privacy. Data quality, meanwhile, determines how well an agent can perform a task,” Naveen Rao, VP of AI at Databricks, told VentureBeat. ... “No agent, no matter how high the quality or impressive the results, should see the light of day if the developers don’t have confidence that only the right people can access the right information/AI capability. This is why we started with the governance layer with Unity Catalog and have built our AI stack on top of that,” Rao emphasized.
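The governance point, that the tools determining which services access which data become the enforcement method for agents, can be sketched as a small grant-based policy layer. This is illustrative only and is not the Unity Catalog API; all names are assumptions.

```python
class AgentGovernance:
    """Access control for AI agents: an agent may only query data
    domains it has been explicitly granted."""

    def __init__(self):
        self._grants = {}  # agent name -> set of granted domains

    def grant(self, agent, domain):
        self._grants.setdefault(agent, set()).add(domain)

    def authorize(self, agent, domain):
        return domain in self._grants.get(agent, set())

    def query(self, agent, domain, run):
        # Enforce the grant before the agent's query function executes.
        if not self.authorize(agent, domain):
            raise PermissionError(f"{agent} may not read {domain}")
        return run()
```

Putting the check in front of every query is the design choice the excerpt describes: no agent capability ships unless the governance layer can guarantee only the right principals reach the right data.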


Enhancing visibility for better security in multi-cloud and hybrid environments

The number one challenge for infrastructure and cloud security teams is visibility into their overall risk, especially in complex environments like cloud, hybrid cloud, containers, and Kubernetes. Kubernetes is now the tool of choice for orchestrating and running microservices in containers, but it has also been one of the last areas to catch speed from a security perspective, leaving many security teams feeling caught on their heels. This is true even if they have deployed admission control or have other container security measures in place. Teams need a security tool in place that can show them who is accessing their workloads and what is happening in them at any given moment, as these environments have an ephemeral nature to them. A lot of legacy tooling just has not kept up with this demand. The best visibility is achieved with tooling that allows for real-time visibility and real-time detection, not point-in-time snapshotting, which does not keep up with the ever-changing nature of modern cloud environments. To achieve better visibility in the cloud, automate security monitoring and alerting to reduce manual effort and ensure comprehensive coverage. Centralize security data using dashboards or log aggregation tools to consolidate insights from across your cloud platforms.


How Augmented Reality is Shaping EV Development and Design

Traditionally, prototyping has been a costly and time-consuming stage in vehicle development, often requiring multiple physical models and extensive trial and error. AR is disrupting this process by enabling engineers to create and test virtual prototypes before building physical ones. Through immersive visualizations, teams can virtually assess design aspects like fit, function, and aesthetics, streamlining modifications and significantly shortening development cycles. ... One of the key shifts in EV manufacturing is the emphasis on consumer-centric design. EV buyers today expect not just efficiency but also vehicles that reflect their lifestyle choices, from customizable interiors to cutting-edge tech features. AR offers manufacturers a way to directly engage consumers in the design process, offering a virtual showroom experience that enhances the customization journey. ... AR-assisted training is one frontier seeing a lot of adoption. By removing humans from dangerous scenarios while still allowing them to interact with those same scenarios, companies can increase safety while still offering practical training. In one example from Volvo, augmented reality is allowing first responders to assess damage to EVs and proceed with caution.


Digital twins: The key to unlocking end-to-end supply chain growth

Digital twins can be used to model the interaction between physical and digital processes all along the supply chain—from product ideation and manufacturing to warehousing and distribution, from in-store or online purchases to shipping and returns. Thus, digital twins paint a clear picture of an optimal end-to-end supply chain process. What’s more, paired with today’s advances in predictive AI, digital twins can become both predictive and prescriptive. They can predict future scenarios to suggest areas for improvement or growth, ultimately leading to a self-monitoring and self-healing supply chain. In other words, digital twins empower the switch from heuristic-based supply chain management to dynamic and granular optimization, providing a 360-degree view of value and performance leakage. To understand how a self-healing supply chain might work in practice, let’s look at one example: using digital twins, a retailer sets dynamic SKU-level safety stock targets for each fulfillment center that dynamically evolve with localized and seasonal demand patterns. Moreover, this granular optimization is applied not just to inventory management but also to every part of the end-to-end supply chain—from procurement and product design to manufacturing and demand forecasting. 
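The dynamic SKU-level safety-stock example can be made concrete with the standard safety-stock formula: a z-score for the target service level times demand variability, scaled by lead time. The 95% service level and the demand series below are assumptions; in the scenario described, a digital twin would recompute this continuously per SKU and fulfillment center as demand patterns shift.

```python
from math import sqrt
from statistics import stdev

def safety_stock(daily_demand, lead_time_days, z=1.65):
    """Safety-stock target for one SKU at one fulfillment center:
    z (1.65 ~ 95% service level) * stdev of daily demand * sqrt(lead time).
    Feeding in a rolling window of recent demand makes the target
    dynamic, evolving with localized and seasonal patterns."""
    sigma = stdev(daily_demand)
    return z * sigma * sqrt(lead_time_days)
```

Perfectly stable demand needs no buffer at all, while more volatile demand or longer lead times push the target up, which is the granular, self-adjusting behavior the excerpt attributes to twin-driven inventory management.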


Illegal Crypto Mining: How Businesses Can Prevent Themselves From Being ‘Cryptojacked’

Business leaders might believe that illegal crypto mining programs pose no risks to their operations. Considering the number of resources most businesses dedicate to cybersecurity, it might seem like a low priority in comparison to other risks. However, the successful deployment of malicious crypto mining software can lead to even more risks for businesses, putting their cybersecurity posture in jeopardy. Malware and other forms of malicious software can drain computing resources, cutting the life expectancy of computer hardware. This can decrease the long-term performance and productivity of all infected computers and devices. Additionally, the large amount of energy required to support the high computing power of crypto mining can drain electricity across the organization. But one of the most severe risks associated with malicious crypto mining software is that it can include other code that exploits existing vulnerabilities. ... While powerful cybersecurity tools are certainly important, there’s no single solution to combat illegal crypto mining. But there are different strategies that business leaders can implement to reduce the likelihood of a breach, and mitigating human error is among the most important. 
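One common signal of cryptojacking is the sustained near-maximum CPU utilization that drains hardware and electricity as described above. The sketch below flags a long run of high readings; the threshold and window size are illustrative assumptions, not a production detection rule.

```python
def sustained_high_cpu(samples, threshold=90.0, min_run=6):
    """Cryptomining tends to show as long runs of near-max CPU.
    Return True if `samples` (percent utilization, taken at a fixed
    interval) contain `min_run` consecutive readings above `threshold`."""
    run = 0
    for s in samples:
        run = run + 1 if s > threshold else 0
        if run >= min_run:
            return True
    return False
```

Legitimate bursts drop back below the threshold and reset the run counter, so only workloads that pin the CPU continuously, the mining pattern, trip the alert.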


10 Most Impactful PAM Use Cases for Enhancing Organizational Security

Security extends beyond internal employees as collaborations with third parties also introduce vulnerabilities. PAM solutions allow you to provide vendors with time-limited, task-specific access to your systems and monitor their activity in real time. With PAM, you can also promptly revoke third-party access when a project is completed, ensuring no dormant accounts remain unattended. Suppose you engage third-party administrators to manage your database. In this case, PAM enables you to restrict their access based on a "need-to-know" basis, track their activities within your systems, and automatically remove their access once they complete the job. ... Reused or weak passwords are easy targets for attackers. Relying on manual password management adds another layer of risk, as it is both tedious and prone to human error. That's where PAM solutions with password management capabilities can make a difference. Such solutions can help you secure passwords throughout their entire lifecycle — from creation and storage to automatic rotation. By handling credentials with such PAM solutions and setting permissions according to user roles, you can make sure all the passwords are accessible only to authorized users. 
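The time-limited, revocable third-party access described above can be sketched as a just-in-time grant object. This is a conceptual model, not the API of any PAM product; the names and TTL mechanics are assumptions.

```python
from datetime import datetime, timedelta, timezone

class AccessGrant:
    """Time-boxed, task-specific third-party access with explicit
    revocation -- a minimal model of a PAM just-in-time grant."""

    def __init__(self, vendor, scope, ttl: timedelta):
        self.vendor = vendor
        self.scope = scope                 # the "need-to-know" boundary
        self.expires_at = datetime.now(timezone.utc) + ttl
        self.revoked = False

    def is_active(self, now=None):
        # Expired or revoked grants fail closed -- no dormant access remains.
        now = now or datetime.now(timezone.utc)
        return not self.revoked and now < self.expires_at

    def revoke(self):
        self.revoked = True
```

Because every grant carries its own expiry, third-party access disappears on schedule even if nobody remembers to clean it up, and `revoke()` covers the "project completed early" case.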


The Information Value Chain as a Framework for Tackling Disinformation

The information value chain has three stages: production, distribution, and consumption. Claire Wardle proposed an early version of this framework in 2017. Since then, scholars have suggested tackling disinformation through an economics lens. Using this approach, we can understand production as supply, consumption as demand, and distribution as a marketplace. In so doing, we can single out key stakeholders at each stage and determine how best to engage them to combat disinformation. By seeing disinformation as a commodity, we can better identify and address the underlying motivations ... When it comes to the disinformation marketplace, disinformation experts mostly agree it is appropriate to point the finger at Big Tech. Profit-driven social media platforms have understood for years that our attention is the ultimate gold mine and that inflammatory content is what attracts the most attention. There is, therefore, a direct correlation between how much disinformation circulates on a platform and how much money it makes from advertising. ... To tackle disinformation, we must think like economists, not just like fact-checkers, technologists, or investigators. We must understand the disinformation value chain and identify the actors and their incentives, obstacles, and motivations at each stage.


Why do developers love clean code but hate writing documentation?

In fast-paced development environments, particularly those adopting Agile methodologies, maintaining up-to-date documentation can be challenging. Developers often deprioritize documentation due to tight deadlines and a focus on delivering working code. This leads to informal, hard-to-understand documentation that quickly becomes outdated as the software evolves. Another significant issue is that documentation is frequently viewed as unnecessary overhead. Developers may believe that code should be self-explanatory or that documentation slows down the development process. ... To prevent documentation from becoming a second-class citizen in the software development lifecycle, Ferri-Beneditti argues that documentation needs to be observable, something that can be measured against the KPIs and goals developers and their managers often use when delivering projects. ... By offloading the burden of documentation creation onto AI, developers are free to stay in their flow state, focusing on the tasks they enjoy—building and problem-solving—while still ensuring that the documentation remains comprehensive and up-to-date. Perhaps most importantly, this synergy between GenAI and human developers does not remove human oversight. 



Quote for the day:

"The harder you work for something, the greater you'll feel when you achieve it." -- Unknown