
Daily Tech Digest - November 14, 2025


Quote for the day:

"The only way to achieve the impossible is to believe it is possible." -- Charles Kingsleigh



When will browser agents do real work?

Vision-based agents treat the browser as a visual canvas. They look at screenshots, interpret them using multimodal models, and output low-level actions like “click (210, 260)” or “type ‘Peter Pan’.” This mimics how a human would use a computer—reading visible text, locating buttons visually, and clicking where needed. ... DOM-based agents, by contrast, operate directly on the Document Object Model (DOM), the structured tree that defines every webpage. Instead of interpreting pixels, they reason over textual representations of the page: element tags, attributes, ARIA roles, and labels. ... Running a browser agent once successfully doesn’t mean it can repeat the task reliably. The next frontier is learning from exploration: transforming first-time behaviors into reusable automations. One promising strategy, now seeing wider deployment, is to let agents explore workflows visually, then encode those paths into structured representations like DOM selectors or code. ... With new large language models excelling at writing and editing code, these agents can self-generate and improve their own scripts, creating a cycle of self-optimization. Over time, the system becomes like a skilled worker: slower on the first task, but exponentially faster on repeat executions. This hybrid, self-improving approach—combining vision, structure, and code synthesis—is what makes browser automation increasingly robust.
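To make the contrast concrete, here is a minimal, hypothetical sketch of how the two styles emit actions. The class names, snapshot shape, and selectors are illustrative assumptions, not any real agent framework's API:

```python
# Illustrative contrast between vision-based and DOM-based agent steps.
# All names here (Action, vision_agent_step, dom_agent_step) are hypothetical.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str      # "click" or "type"
    target: str    # pixel coordinates (vision) or a DOM selector (DOM)
    text: str = ""

def vision_agent_step(screenshot_caption: str) -> Action:
    """A vision-based agent reasons over pixels and emits low-level
    coordinate actions, e.g. click (210, 260)."""
    return Action(kind="click", target="(210, 260)")

def dom_agent_step(dom_snapshot: dict) -> Action:
    """A DOM-based agent reasons over the structured tree (tags, ARIA
    roles, labels) and anchors actions to stable selectors, not pixels."""
    for node in dom_snapshot["nodes"]:
        if node.get("role") == "button" and node.get("label") == "Search":
            return Action(kind="click", target=node["selector"])
    raise LookupError("no matching element")

snapshot = {"nodes": [
    {"role": "textbox", "label": "Title", "selector": "#q"},
    {"role": "button", "label": "Search", "selector": "button[type=submit]"},
]}
print(dom_agent_step(snapshot).target)  # button[type=submit]
```

The DOM action survives layout changes (the button can move on screen), which is exactly why encoding explored paths as selectors makes repeat runs more reliable.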


Security Degradation in AI-Generated Code: A Threat Vector CISOs Can’t Ignore

LLMs have been a boon for developers since OpenAI’s ChatGPT was publicly released in November 2022, followed by other AI models. Developers were quick to utilize the tools, which significantly increased productivity for overtaxed development teams. However, that productivity boost came with security concerns, such as AI models trained on flawed code from internal or publicly available repositories. Those models introduced vulnerabilities that sometimes spread throughout the entire software ecosystem. One way to address the problem was by using LLMs to make iterative improvements to code-level security during the development process, under the assumption that LLMs, given the job of correcting mistakes, would amend them. One recent study, however, turns that assumption on its head. Although previous studies (and extensive real-world experience, including our own data) have demonstrated that an LLM can introduce vulnerabilities in the code it generates, this study went a step further, finding that iterative refinement of code can introduce new errors. ... The security degradation introduced in the feedback loop raises troubling questions for developers, tool designers and AI safety researchers. The answer to those questions, the authors write, involves human intervention. Developers, for instance, must maintain control of the development process, viewing AI as a collaborative assistant rather than an autonomous tool.


Are We in the Quantum Decade?

It would be prohibitively expensive even for a Fortune 100 company to own, operate and maintain its own quantum computer. It would require a quantum ecosystem that includes government, academia and industry entities to make it accessible to an enterprise. In most cases, the push and funding could come from the government or through cooperation among nations. Historically, new computing technology was rented and used as a service. Compute resources financed by government were booked in advance. Processing occurred in batches using resource-sharing techniques such as time slicing. Equivalent models are expected for quantum processing. ... The era of quantum computing looms large, but enterprises and IT teams should be thinking about it today. Infrastructure needs to be deployed and algorithms need to be written for executing business use cases. "For several years to come, CIOs may not have much to do with quantum computing. But they need to know what it is, what it can do and how much it costs," said Lawrence Gasman, president of Communications Industry Researchers. "Quantum networks and cybersecurity will become necessary for secure communications by 2030 or even earlier." Quantum computing will not replace classical computing, but data center providers need to be thinking about how they will integrate the two architectures using interconnects like co-packaged optics.


When Data Gravity Meets Disaster Recovery

The more data aggregates in one place, the more it pulls everything else toward it: apps, analytics, integrations, even people and processes. Over time, that environment becomes a tightly woven web of dependencies. While it may be fine for day-to-day operations, it becomes a nightmare when something breaks. At that point, DR turns into the delicate task of relocating an entire ecosystem, not simply a matter of copying files. You have to think about relationships: which systems rely on which datasets, how permissions are mapped, and how applications expect to find what they need. Of course, the bigger that web gets, the heavier the “gravitational field.” Moving petabytes of interconnected data across regions or clouds isn’t fast or easy. It takes time, bandwidth, and planning, and every extra gigabyte adds friction – in other words, the more gravity your data has, the harder it is to recover from disaster quickly. ... To push back against gravity, organizations are rethinking their architectures. Instead of forcing all data into one environment, they’re distributing it intelligently, keeping mission-critical workloads close to where they’re created while replicating copies to nearby or complementary environments for protection. Hybrid and multi-cloud DR strategies have become the go-to solution for this. They blend the best of both worlds: the low-latency performance of local infrastructure with the flexibility and geographic reach of cloud storage.
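A rough back-of-the-envelope calculation shows why that friction matters for recovery-time objectives. The dataset size and link speed below are illustrative assumptions:

```python
# How long does it take to move a large dataset across regions at a
# given sustained bandwidth? (Ignores protocol overhead and retries,
# which only make things worse.)

def transfer_hours(dataset_tb: float, bandwidth_gbps: float) -> float:
    bits = dataset_tb * 1e12 * 8              # terabytes -> bits
    seconds = bits / (bandwidth_gbps * 1e9)   # bits / (bits per second)
    return seconds / 3600

# Example: a 1 PB estate over a dedicated 10 Gbps link.
print(round(transfer_hours(1000, 10), 1))  # 222.2 hours, i.e. over 9 days
```

Even with a saturated 10 Gbps link, a petabyte takes more than a week to move, which is why replicating copies ahead of time beats bulk migration after a disaster.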


What’s Driving the EU’s AI Act Shake-Up?

The move to revise the AI Act follows sustained lobbying from US tech giants. In October, the Computer and Communications Industry Association (CCIA), whose members include Apple, Meta, and Amazon, launched a campaign pushing for simplification not only of the AI Act but of the EU’s entire digital rulebook. Meanwhile, EU officials have reportedly engaged with the Trump administration on these issues. ... The potential delay reflects pressure from national authorities. Denmark and Germany have both pushed for a one-year extension. A spokesperson from Germany’s Federal Ministry for Digital Transformation and Government Modernization said that a delay “would allow sufficient time for the practical application of European standards by AI providers, with standards still currently being elaborated.” ... Another major reform under consideration is expanding and centralizing oversight powers within the Commission’s AI Office. Currently responsible for general-purpose AI models (GPAI), the office would gain new authority to oversee all AI systems based on GPAI and conduct conformity assessments for certain high-risk systems. The Commission would also gain new authority to perform conformity assessments for certain high-risk systems and supervise online services deemed to pose “systemic risk” under the Digital Services Act. This would shift more power to Brussels and expand the mandate of the Commission’s AI Office beyond its current role supervising GPAI.


BITS & BYTES: The Foundational Lens for Enterprise Transformation

BITS serves as high-level strategic governance—ensuring balanced maturity assessments across business alignment, information-centric decision-making, technology enablement, and security resilience—while leveraging BDAT’s detailed sub-domains (layers and components) for tactical implementation and operational oversight. This allows organizations to maintain BDAT’s precision in decomposing complex IT landscapes (e.g., mapping specific data architectures or application portfolios) within BITS’s overarching pillars, fostering adaptive governance that scales from atomic “bits” of change to enterprise-wide transformations ... If BITS defines what must be managed, BYTES (Balanced Yearly Transformation to Enhance Services) defines how change must be processed. BYTES is more than a set of principles; it is a derivative of the core architectural lifecycle: Plan (Balanced Yearly), Design & Build (Transformation Enhancing), and Run (Services). Each component of BYTES directly maps to the mandatory stages of a continuous transformation framework, enabling architects to manage change at its source. ... The BITS & BYTES framework is not intended to replace existing architecture frameworks (e.g., TOGAF, Zachman, DAMA, IT4IT, SAFe). Instead, it acts as a meta-framework—a simplified, high-level matrix that accommodates and contextualizes the applicability of all existing models.


Unlocking GenAI and Cloud Effectiveness With Intelligent Archiving

Unlike tiering, which functions like a permanent librarian selectively fetching individual files from deep storage, true archiving is a one-time event that moves files based on defined policies, such as last access or modification date. Once archived, files are stored on a long-term platform and remain accessible without reliance on any intermediary system or application. In this context, one of the main challenges is that most enterprise data is unstructured, including everything from images and videos to emails and social media content. Collectively, these vast and diverse data lakes present a formidable management challenge, and without rigorous control, organizations risk falling victim to the classic “garbage in, garbage out” problem. ... Modern archiving technologies that connect directly to both primary and archive storage platforms eliminate the need for a middleman, drastically improving migration speed, accuracy, and long-term data accessibility. This means organizations can migrate only what’s necessary, ensuring high-value data is cloud-ready while offloading cold data to cost-efficient archival platforms. This not only reduces cloud storage costs but also supports the adoption of cloud-native formats, enabling greater scalability and performance for active workloads. ... In the modern enterprise, more than 60% of data is typically inactive and often goes untouched for years, yet organizations keep consuming high-performance (and high-cost) storage to hold it.
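An archive policy of the kind described (select by last modification date) can be sketched in a few lines. The root path and the three-year cutoff are illustrative assumptions, and a real archiver would move the files to the archive platform rather than just list them:

```python
# Minimal sketch of a policy-driven archive pass: walk a tree and
# select files untouched for longer than a cutoff age.
import os
import time

def archive_candidates(root: str, max_age_days: int) -> list[str]:
    cutoff = time.time() - max_age_days * 86400
    selected = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:   # not modified since cutoff
                selected.append(path)
    return selected

# e.g. archive_candidates("/mnt/projects", max_age_days=3 * 365)
```

Because the policy is a one-time selection rather than an ongoing fetch-on-demand layer, the archived files need no intermediary once they land on the long-term platform.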


Why 60% of BI Initiatives Fail (and How Enterprises Can Avoid It)

Many BI projects fail because goals and outcomes aren’t clearly defined. While enterprises may be confident that they understand their BI gaps, their goals are often vague, lacking detail and internal consensus. ... Poor project management practices, vague processes, and shifting responsibilities create even more confusion. In many failed BI projects, BI is viewed as “just another IT initiative,” whereas it should be treated as part of a business transformation program. Without active sponsorship and accountability, the technology may be delivered, but its adoption and impact suffer. ... Agile and iterative methods are often preferred because they suit BI well, whereas the waterfall method is not recommended for BI projects: it lacks the agility to adapt to changing requirements, iterative data exploration, and continuous business feedback. Under the waterfall approach, users are engaged only at the beginning and end of the project, which leaves gaps when development or data-analysis issues arise. ... A system is only as good as the people who use it; research has shown that 55% of users lack confidence in BI tools due to insufficient training. Enterprises often expend considerable resources on deployment but neglect enablement. If employees can’t work out how to navigate dashboards, interpret data quality and visualizations, or use insights to make daily decisions, adoption rates suffer.


Authentication in the age of AI spoofing

Unlike traditional malware, which may find its way into networks through a compromised software update or download, AI-powered threats utilize machine learning to analyze how employees authenticate themselves to access networks, including when they log in, from which devices, their typing patterns, and even their mouse movements. The AI learns to mimic legitimate behavior while collecting login credentials and is ultimately deployed to evade basic detection. ... Beyond the statistics, AI’s effectiveness is driven by its rapidly improving ability to socially engineer humans — replicating writing style, voice cadence, facial expressions or speech with subtle nuance and adding realistic context by scanning social media and other publicly available references. The data is striking and reflects the crucial need for a multi-layered approach to counter AI’s escalating ability to trick humans. ... Cryptographic protection complements biometric authentication, which verifies “Is this the right person?” at the device level, while passkeys verify “Is this the right website or service?” at the network level. Multi-modal biometrics, such as facial recognition plus fingerprint scanning or biometrics plus behavioral patterns, further strengthen this approach. As AI-powered attacks make credential theft and impersonation more sophisticated, the only sustainable line of defense is a form of authentication that cannot be tricked because it is cryptographically verified.


Why your security strategy is failing before it even starts

The biggest mistake I see among organizations is initiating cybersecurity efforts with technology rather than prioritizing risk and business alignment. Cybersecurity is often mischaracterized as a technical issue, when in reality it’s a business risk management function. Failure to establish this connection early often results in fragmented decision-making and limited executive engagement. Effective cybersecurity strategies should be embedded into business objectives from the outset. This requires identifying the business’s critical assets, assessing potential threats and motivations, and evaluating the impact of assets becoming compromised. Too often, CISOs jump straight into acquiring cybersecurity tools without addressing these questions. ... First, the threat landscape shifted dramatically. Cybersecurity attacks today target operational technology (OT) and industrial control systems (ICS). In food manufacturing, those systems run production lines, refrigeration, and safety processes. A cyber incident in these areas extends beyond data loss; it can disrupt production and even compromise food safety, introducing a far more complex level of risk. Second, it became evident to me that cybersecurity cannot operate in isolation. It must support and enable business operations and growth. Today, my approach is risk-based and aligned with our business priorities, while still built on zero trust principles. We focus on resilience, not just compliance, and OT security is a core pillar of that strategy.

Daily Tech Digest - October 12, 2025


Quote for the day:

"Trust because you are willing to accept the risk, not because it's safe or certain." -- Anonymous



AI and Data Governance: The Power Duo Reshaping Business Intelligence

Fortunately, the relationship between AI and data governance isn’t one-sided. By leveraging automation, pattern recognition, and real-time analytics, AI enables organizations to manage data quality, compliance, and security more effectively. AI models can identify inaccuracies or inconsistencies, flag anomalies, and automatically correct missing or duplicate records, minimizing the risk of generating misleading results from poor-quality datasets. It can track organizational data in real time, ensuring accurate classification of sensitive information, enforcing access controls, and proactively identifying policy violations before they escalate. This approach enables organizations to move away from manual auditing and adopt automated, self-correcting governance workflows. ... To leverage the full potential of the relationship between AI and governance, organizations must establish a continuous feedback loop between their governance frameworks and AI systems. AI shouldn’t function independently; it must be constantly updated and aligned with governance policies to maintain accuracy, transparency, and compliance. One of the best ways to achieve this is by using intelligent data platforms such as Semarchy’s master data management (MDM) and data catalog solutions. These solutions unify and control AI data from a trusted, single source of truth, ensuring consistency across business functions.
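As a toy illustration of the checks described above (not Semarchy's actual API, and far simpler than a production MDM pipeline), a governance pass might flag incomplete and duplicate records so a workflow can auto-correct or quarantine them:

```python
# Hypothetical data-quality audit: flag records with missing values and
# records whose key has already been seen (duplicates).

def audit_records(records: list[dict], key: str) -> dict:
    seen, duplicates, incomplete = set(), [], []
    for rec in records:
        if any(v in (None, "") for v in rec.values()):
            incomplete.append(rec)          # missing field -> needs repair
        if rec[key] in seen:
            duplicates.append(rec)          # same key twice -> merge/dedupe
        seen.add(rec[key])
    return {"duplicates": duplicates, "incomplete": incomplete}

customers = [
    {"id": 1, "email": "a@example.com"},
    {"id": 1, "email": "a@example.com"},   # duplicate id
    {"id": 2, "email": ""},                # missing email
]
report = audit_records(customers, key="id")
print(len(report["duplicates"]), len(report["incomplete"]))  # 1 1
```

In practice the "AI" part replaces these hard-coded rules with learned anomaly detection, but the governance loop is the same: detect, flag, and feed corrections back into the trusted source.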


Building cyber resilience in a volatile world

Supply chain attacks show just how fragile the ecosystem can be, given that when one link breaks, the shockwaves ripple across agencies and sectors. That’s why the shift away from outmoded ideas of “prevention” by building walls around environments to a new kind of resilience is so stark. For example, zero trust is no longer optional; it’s the baseline. Verification must be constant, and assumptions about “safe” internal networks belong in the past. Meanwhile, AI governance and quantum-resistant cryptography have jumped from academic conversations to immediate government standards. Institutional muscle is being flexed too.  ... The transformation ahead is as much cultural as technical. Agencies must shift from being static defenders to dynamic operators, and need to be ready to adapt, recover, and press on even as attacks intensify. Cybersecurity is not just another line item in the IT budget, but rather the backbone of national resilience. The ability to keep delivering services, protect citizen trust, and safeguard critical infrastructure is now inseparable from how well agencies manage cyber risk. Resilience is not built by chance. It’s built through strategy, investment, and relentless partnership. It means turning frameworks into live capability, leveraging industry expertise, and embedding a mindset that sees cyber not as a constraint but as a foundation for confidence and continuity.


Fighting Disinformation Demands Confronting Social and Economic Drivers

Moving beyond security theater requires embracing ideological critique as a foundational methodology for information integrity policy research. This means shifting from “how do we stop misinformation?” to “what material and symbolic interests does information serve, and how do power relations shape what counts as legitimate knowledge?” This approach demands examining not just false information, but the entire apparatus through which beliefs become hegemonic, others verboten. Ideological critique offers three analytical tools absent from current information integrity policy research. First, it provides established scholarly techniques for examining how seemingly neutral technical systems encode worldviews and serve specific class interests. Platform algorithms, content moderation policies, and fact-checking systems all embed assumptions about authority, truth, and social order that more often than not favor existing power arrangements. Second, it offers frameworks for understanding how dominant groups maintain cognitive hegemony: the ability to shape not just what people think, but how they think. Third, it provides tools for analyzing how groups develop counter-hegemonic consciousness, alternative meaning-making systems and their ‘hidden transcripts’. Adopting these techniques can craft better policy responses to disinformation.


Cloud Infrastructure Isn't Dead, It's Just Becoming Invisible

Let's be honest: most cloud platforms are more alike than different. Storage, compute, and networking are commoditized. APIs are standard. Reliability and scalability are expected. Most agree that the cloud itself is no longer a differentiator; it's a utility. That's why the value is moving up the stack. Engineers don't need more IaaS; they need better ways to work with it. They want file systems that feel local, even when they're remote. They want zero-copy collaboration and speed. And they want all of that without worrying about provisioning, syncing, or latency. Today, cloud users are shifting their expectations toward solutions that utilize standard infrastructure such as object storage and virtual servers, yet abstract away the complexity. The appeal is in performance and usability improvements that make infrastructure feel invisible. ... What makes this shift important is that it's rooted in practical need. When you're working with terabytes or petabytes of high-resolution video, training a model on noisy real-world data, or collaborating across time zones on a shared dataset, traditional cloud workflows break down. Downloading files locally isn't scalable, and copying data between environments wastes time and resources. Latency is a momentum killer. This is where invisible infrastructure shines. It doesn't just abstract the cloud, it makes it better suited to the way developers actually build and collaborate today.


The great misalignment in business transformation

It’s easy to point the finger at artificial intelligence (AI) for today’s disruption in the tech workforce. After all, AI is changing how coding, analysis and even project management are done. Entire categories of tasks are being automated. Advocates argue that workers will inevitably be replaced, while critics frame it as the next wave of technological unemployment. Recent surveys have shown that employee optimism is fading. ... The problem is compounded by the emphasis on being “more artistic” or “more technical.” Both approaches miss the mark. Neither artistry for its own sake nor hyper-technical detail guarantees relevance if business problems remain unsolved. The technology industry has always experienced cycles of boom and bust. From the dot-com bubble to the recent AI surge, waves of hiring and layoffs are nothing new. What is new, however, is the growing realization that some jobs may not need to come back at all. ... Analysis without insight devolves into repetitive reporting, adding noise rather than clarity. Creativity without business grounding drifts into theatre, producing workshops and “innovation sessions” that inspire but fail to deliver results. Both are missing the target. Worse still, companies have proven they can operate without many of these roles altogether. The lesson is clear: being more artistic or more technical is not the answer. 


The Architecture Repository: Turning Enterprise Architecture into a Strategic Asset

While the Enterprise Continuum provides the context — a spectrum from generic to organization-specific models — the Architecture Repository provides the structure to store, manage, and evolve those models. ... At the heart of the repository lies the Architecture Metamodel. This is the blueprint for how architectural content is structured, related, and interpreted. It defines the vocabulary, relationships, and rules that govern the creation and classification of artifacts. The metamodel ensures consistency across the repository. Whether you’re modeling business processes, application components, or data flows, the metamodel provides a common language and structure. It’s the foundation for traceability, reuse, and integration. In practice, the metamodel is tailored to the organization’s needs. It reflects the enterprise’s modeling standards, governance policies, and stakeholder requirements. It’s not just a technical artifact — it’s a strategic enabler of clarity and coherence. ... Architecture must respond to real needs. The Architecture Requirements Repository captures all authorized requirements — business drivers, stakeholder concerns, and regulatory mandates — that guide architectural development. ... Architecture is not just about models — it’s about solutions. The Solutions Landscape presents the architectural representation of Solution Building Blocks (SBBs) that support the Architecture Landscape.


Cyberpsychology’s Influence on Modern Computing

Psychological research on decision making and cognitive processes has been fundamental to understanding perceptions and behavior in the areas of cybersecurity and cyberprivacy. Much of this work focuses on cognitive biases and emotional states, which inform the actions of both users and attackers. ... Both cognition and affect play a role in these phenomena. Specifically, under conditions of diminished information processing—such as in the case of cognitive demands or affective experiences such as a positive mood state—people are less likely to make decisions based on strongly held beliefs. For example, a consumer’s positive emotional state, such as happiness with the Internet, mediates the negative effects of information-collection concerns on their willingness to disclose personal information. Interestingly, cybersecurity experts are as vulnerable to phishing and social engineering attacks as those who are not cybersecurity experts. A deep understanding of the perceptual, cognitive, and emotional mechanisms that result in lapses of judgment or even behavior incongruent with one’s intellectual understanding is vital to minimizing such threats. In addition to cognitive and emotional states, personality models have provided insight into human behavior vis-à-vis technology. The “big five” personality theory, also known as the five-factor model, is a widely accepted framework that has been applied to a broad range of cyber-related behaviors, including cybersecurity.


The Cybersecurity Skills Gap and the Role of Diversity

Cybersecurity is often presented as a technically demanding field, she points out. “This further discourages some women from first entering the industry. For those who have, it’s then about being able to continue growing their careers when they may feel challenged by perceived technical demands,” says Pinkard. And today, cybersecurity is not a purely technical subject. Demand for technical skills will always exist, but the job has changed, says Amanda Finch, CEO, The Chartered Institute for Information Security. ... While the low number of women in cybersecurity is concerning, it’s also important to consider how other types of diversity can help fill the skills gap in the workforce. Inclusion and opportunity is “100% about more than just bringing in more women”: “It's about the different life perspective,” says Pinkard. Those “lived perspectives” are driven by areas such as neurodiversity, ethnic diversity and physical ability diversity, she says. ... Too many companies still treat diversity as a compliance exercise, says Mullins. “When it was no longer a legal requirement in the US, many simply stopped. Others will say, ‘we want more women’, but won’t update their maternity policies and complain that only men apply to their roles. Or they say ‘we want neurodiverse talent’, but resist implementing more flexible working policies to facilitate them.” 


Data quality is no longer optional

AI systems can only be as good as the data that feeds them. When information is incomplete, inconsistent or trapped in silos, the insights and predictions those systems produce become unreliable. The risk is not just missed opportunities but strategic missteps that erode customer trust and competitive positioning. ... Companies with a strong digital foundation are already ahead in AI adoption, and those without risk drowning in information while starving their AI models of the clean, reliable inputs they need. But before any organisation can realise AI’s full potential, it must first build a resilient data foundation, and the enterprises that place data quality at the heart of their digital strategy are already seeing measurable gains. By investing in robust governance, integrating AI with data management and removing silos across departments, they create connected teams and more agile operations. ... Raising data quality is not a one-off exercise; it requires a cultural shift that calls for collaboration across IT, operations and business units. Leaders must set clear standards for how data is captured, cleaned and maintained, and champion the idea that every employee is a steward of data integrity. The long-term challenge is to design data architectures that can support scale and complexity and embrace distributed paradigms that support interoperability. These architectures do more than maintain order. 


Shadow AI in Your Systems: How to Detect and Control It

"Shadow AI" is when people in an organization use AI tools like generative models, coding assistants, agentic bots, or third-party LLM services without getting permission from IT or cybersecurity. This is the next step in the evolution of "shadow IT," but the stakes are higher because models can read sensitive text, make API calls on their own, and run automated tasks across systems. Industry definitions and primers say that shadow AI happens when employees use AI apps without official supervision, which can lead to data leaks, privacy issues, and compliance problems. ... Agents that automate web interactions usually need credentials, API keys, or tokens to act on behalf of employees. Agents can get into systems directly if keys are poorly managed or embedded in scripts. ... Telltale signals include outbound traffic to known AI provider endpoints, nonstandard hostname patterns, and unusual POST bodies. Modern proxy and firewall logs often capture URLs and headers that reveal which model vendors are being used. Check your web gateway and proxy logs for spikes in API calls and endpoints that you don't recognize. ... Agents often perform many navigations, clicks, and form submissions in a short amount of time, which is different from how people browse. Look for inhuman navigation patterns, intervals that are always the same, or pages that are crawled in tight loops.
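The two detections above — known AI-provider endpoints in proxy logs, and metronomically regular request timing — can be sketched as follows. The hostnames and the jitter threshold are illustrative assumptions, not a vetted vendor list:

```python
# Sketch of two shadow-AI detections over (timestamp, hostname) log entries:
# 1) outbound hits on known AI-provider hosts, 2) near-constant request
# cadence that suggests an automated agent rather than a human.
from statistics import pstdev

AI_HOSTS = {"api.openai.com", "api.anthropic.com",
            "generativelanguage.googleapis.com"}

def flag_ai_hosts(log_lines: list[tuple[float, str]]) -> set[str]:
    """Return the AI-provider hostnames seen in the log."""
    return {host for _ts, host in log_lines if host in AI_HOSTS}

def looks_automated(timestamps: list[float], jitter_s: float = 0.5) -> bool:
    """Flag sequences whose inter-request gaps are suspiciously uniform."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return len(gaps) >= 3 and pstdev(gaps) < jitter_s

log = [(0.0, "api.openai.com"), (2.0, "intranet.local"),
       (4.0, "api.openai.com"), (6.0, "api.openai.com")]
print(flag_ai_hosts(log))                    # {'api.openai.com'}
print(looks_automated([t for t, _ in log]))  # True: gaps are exactly 2s
```

Real deployments would match against a maintained endpoint feed and baseline per-user timing, but the shape of the detection is the same.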

Daily Tech Digest - April 04, 2025


Quote for the day:

“Going into business for yourself, becoming an entrepreneur, is the modern-day equivalent of pioneering on the old frontier.” -- Paula Nelson



Hyperlight Wasm points to the future of serverless

WebAssembly support significantly expands the range of supported languages for Hyperlight, ensuring that compiled languages as well as interpreted ones like JavaScript can be run on a micro VM. Your image does get more complex here, as you need to bundle an additional runtime in the Hyperlight image, along with writing code that loads both runtime and application as part of the launch process. ... There’s a lot of work going on in the WebAssembly community to define a specification for a component model. This is intended to be a way to share binaries and libraries, allowing code to interoperate easily. The Hyperlight Wasm tool offers the option of compiling a development branch with support for WebAssembly Components, though it’s not quite ready for prime time. In practice, this will likely be the basis for any final build of the platform, as the specification is being driven by the main WebAssembly platforms. One point that Microsoft makes is that Wasm isn’t only language-independent, it’s architecture-independent, working against a minimal virtual machine. So, code written and developed on an x64 architecture system will run on Arm64 and vice versa, ensuring portability and allowing service providers to move applications to any spare capacity, no matter the host virtual machine.


Beyond SIEM: Embracing unified XDR for smarter security

Implementing SIEM solutions can be challenging and must be managed proactively. Configuring the SIEM system can be very complex, and any error can lead to false positives or missed threats. Integrating SIEM tools with existing security tools and systems is not easy, and the implementation and maintenance processes are resource-intensive, requiring significant time and manpower. Alert fatigue can set in with traditional SIEM platforms, where the volume of generated alerts makes it difficult to identify the genuine ones. ... For industries with stringent compliance requirements, such as finance and healthcare, SIEM remains a necessity due to its log retention, compliance reporting, and event correlation capabilities. Microsoft Sentinel’s AI-driven analytics help security teams fine-tune alerts, reducing false positives and increasing threat detection accuracy. The Microsoft Defender XDR platform offers unified visibility across attack surfaces; a CTEM exposure management solution; CIS framework assessment; zero trust; external attack surface management (EASM); AI-driven automated response to threats; integrated security across Microsoft 365 and third-party platforms spanning Office, email, data, CASB, endpoint, and identity; and reduced complexity by eliminating the need for custom configurations.


Compliance Without Chaos: Build Resilient Digital Operations

A unified platform makes service ownership a no-brainer by directly connecting critical services to the right responders so there’s no scrambling when things go sideways. Teams can set up services quickly and at scale, making it easier to get a real-time pulse on system health and see just how far the damage spreads when something breaks. Instead of chasing down data across a dozen monitoring tools, everything is centralized in one place for easy analysis. ... With all data centralized in a unified platform, the classification and reporting of incidents is far easier with accessible and detailed incident logs that provide a clear audit trail. Sophisticated platforms also integrate with IT service management (ITSM) and IT operations (ITOps) tools to simplify the reporting of incidents based on predefined criteria. ... Every incident, both real and simulated, should be viewed as a learning opportunity. Aggregating data from disparate tools into a single location gives teams a full picture of how their organization’s operations have been affected and supplies a narrative for reporting. Teams can then uncover patterns across tools, teams and time to drive continuous learning in post-incident reviews. Coupled with regular, automated testing of disaster recovery runbooks, teams can build greater confidence in their system’s resilience.


How Organizations Can Benefit From Intelligent Data Infra

The first is getting your enterprise data AI-ready. Predictive AI has been around for a long time. But teams still spend a significant amount of time identifying and cleaning data, which involves handling ETL pipelines, transformations and loading data into data lakes. This is the most expensive step. The same process applies to unstructured data in generative AI. But organizations still need to identify the files and object streams that need to be a part of the training datasets. Organizations need to securely bring them together and load them into feature stores. That's our approach to data management. ... There's a lot of intelligence tied to files and objects. Without that, they will continue to be seen as simple storage entities. With embedded intelligence, you get detection capabilities that let you see what's inside a file and when it was last modified. For instance, if you create embeddings from a PDF file and vectorize them, imagine doing the same for millions of files, which is typical in AI training. This consumes significant computing resources. You don't want to spend compute resources while recreating embeddings on a million files every time there is a modification to the files. Metadata allows us to track changes and only reprocess the files that have been modified. This differential approach optimizes compute cycles.
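The differential approach the interviewee describes can be sketched in a few lines of Python. The content-hash fingerprint and the `embed` callback below are illustrative stand-ins, not the vendor's actual metadata system:

```python
import hashlib
from pathlib import Path

def file_fingerprint(path: Path) -> str:
    """A content hash stands in for richer metadata (mtime, size, version)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def reembed_changed(files, metadata_store, embed):
    """Re-embed only files whose fingerprint differs from the stored one."""
    reprocessed = []
    for path in files:
        fp = file_fingerprint(path)
        if metadata_store.get(str(path)) != fp:
            embed(path)                      # the expensive step, now run rarely
            metadata_store[str(path)] = fp   # record the new fingerprint
            reprocessed.append(path)
    return reprocessed
```

On a corpus of a million files in which only a handful changed, only that handful pays the embedding cost on the next pass.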


Tariff war throws building of data centers into disarray

The potentially biggest variable affecting data center strategy is timing. Depending on the size of an enterprise data center and its purpose, it could take as little as six months to build, or as much as three years. Planning for a location is daunting when ever-changing tariffs and retaliatory tariffs could send costs soaring. Another critical element is knowing when those tariffs will take effect, a data point that has also been changing. Some enterprises are trying to sidestep the tariff issues by purchasing components in bulk, in enough quantities to potentially last a few years. ... “It’s not only space, available energy, cooling, and water resources, but it’s also a question of proximity to where the services are going to be used,” Nguyen said. Finding data center personnel, Nguyen said, is becoming less of an issue, thanks to the efficiencies gained through automation. “The level of automation available means that although personnel costs can be a bit more [in different countries], the efficiencies used means that [hiring people] won’t be the drag that it used to be,” he said. Given the vast amount of uncertainty, enterprise IT leaders wrestling with data center plans have some difficult decisions to make, mostly because they will have to guess where the tariff wars will be many months or years in the future, a virtually impossible task.


The Modern Data Architecture: Unlocking Your Data's Full Potential

If the Data Cloud is your engine, the CDP is your steering wheel—directing that power where it needs to go, precisely when it needs to get there. True real-time CDPs have the ability to transform raw data into immediate action across your entire technology ecosystem, with an event-based architecture that responds to customer signals in milliseconds rather than minutes. This ensures you can dynamically personalize experiences as they unfold—whether during a website visit, mobile app session, or contact center interaction–all while honoring consent. ... As AI capabilities evolve, this Intelligence Layer becomes increasingly autonomous—not just providing recommendations but taking appropriate actions based on pre-defined business rules and learning from outcomes to continuously improve its performance. ... The Modern Data Architecture serves as the foundation for truly intelligent customer experiences by making AI implementations both powerful and practical. By providing clean, unified data at scale, these architectures enable AI systems to generate more accurate predictions, more relevant recommendations, and more natural conversational experiences. Rather than creating isolated AI use cases, forward-thinking organizations are embedding intelligence throughout the customer journey. 
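An event-based architecture of the kind described, where customer signals are routed to handlers only when consent allows, can be sketched minimally. The event names, purposes, and consent store below are invented for illustration:

```python
from collections import defaultdict

class EventRouter:
    """Routes customer signals to handlers while honoring per-customer consent."""

    def __init__(self):
        self.handlers = defaultdict(list)  # event_type -> [(purpose, handler)]
        self.consent = {}                  # customer_id -> set of permitted purposes

    def on(self, event_type, purpose, handler):
        self.handlers[event_type].append((purpose, handler))

    def publish(self, event_type, customer_id, payload):
        fired = []
        for purpose, handler in self.handlers[event_type]:
            # Only act on signals the customer has consented to.
            if purpose in self.consent.get(customer_id, set()):
                handler(customer_id, payload)
                fired.append(purpose)
        return fired
```

Registering a "personalization" handler on a "page_view" event would then fire only for customers who granted that purpose, regardless of how fast events arrive.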


Why AI therapists could further isolate vulnerable patients instead of easing suffering

While chatbots can be programmed to provide some personalised advice, they may not be able to adapt as effectively as a human therapist can. Human therapists tailor their approach to the unique needs and experiences of each person. Chatbots rely on algorithms to interpret user input, but miscommunication can happen due to nuances in language or context. For example, chatbots may struggle to recognise or appropriately respond to cultural differences, which are an important aspect of therapy. A lack of cultural competence in a chatbot could alienate and even harm users from different backgrounds. So while chatbot therapists can be a helpful supplement to traditional therapy, they are not a complete replacement, especially when it comes to more serious mental health needs. ... The talking cure in psychotherapy is a process of fostering human potential for greater self-awareness and personal growth. These apps will never be able to replace the therapeutic relationship developed as part of human psychotherapy. Rather, there’s a risk that these apps could limit users’ connections with other humans, potentially exacerbating the suffering of those with mental health issues – the opposite of what psychotherapy intends to achieve.


Breaking Barriers in Conversational BI/AI with a Semantic Layer

The push for conversational BI was met with adoption inertia. Two major challenges have hindered its potential—the accuracy of the data insights and the speed at which the interface could provide the answers that were sought. This can be attributed to the inherent complexity of data architecture, which involves fragmented data in disparate systems with varying definitions, formats, and contexts. Without a unified structure, even the most advanced AI models risk delivering contextually irrelevant, inconsistent, or inaccurate results. Moreover, traditional data pipelines are not designed for instantaneous query resolution and resolving data from multiple tables, which delays responses. ... Large language models (LLMs) like GPT excel at interpreting natural language but lack the domain-specific knowledge of a data set. A semantic layer can resolve this challenge by acting as an intermediary between raw data and the conversational interface. It unifies data into a consistent, context-aware model that is comprehensible to both humans and machines. Retrieval-augmented generation (RAG) techniques are employed to combine the generative power of LLMs with the retrieval capabilities of structured data systems. 
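The semantic layer's role as intermediary can be illustrated with a toy sketch: business terms resolve to governed definitions before any SQL is generated, so the conversational interface never guesses at table or column names. The metric definitions below are hypothetical:

```python
# A toy semantic layer: business terms map to governed definitions, so a
# natural-language question is grounded before any SQL is generated.
SEMANTIC_MODEL = {
    "revenue": {"table": "fact_sales", "column": "net_amount", "agg": "SUM"},
    "orders":  {"table": "fact_sales", "column": "order_id",   "agg": "COUNT"},
}

def resolve_metric(question: str):
    """Return the governed definition for the first known term in a question."""
    for term, definition in SEMANTIC_MODEL.items():
        if term in question.lower():
            return term, definition
    raise LookupError("no governed metric found; ask the user to clarify")

def to_sql(question: str) -> str:
    term, d = resolve_metric(question)
    return f"SELECT {d['agg']}({d['column']}) AS {term} FROM {d['table']}"
```

In a RAG setup, the resolved definition (rather than raw schema) is what gets retrieved and handed to the LLM, which keeps answers consistent across users who phrase the same question differently.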


The rise of AI PCs: How businesses are reshaping their tech to keep up

Companies are discovering that if they want to take full advantage of AI and run models locally, they need to upgrade their employees' laptops. This realization has introduced a hardware revolution, with the desire to update tech shifting from an afterthought to a priority and attracting significant investment from companies. ... running models locally gives organizations more control over their information and reduces reliance on third-party services. That setup is crucial for companies in financial services, healthcare, and other industries where privacy is a big concern or a regulatory requirement. "For them, on-device AI computer, it's not a nice to have; it's a need to have for fiduciary and HIPAA reasons, respectively," said Mike Bechtel, managing director and the chief futurist at Deloitte Consulting LLP. Another advantage is that local running reduces lag and creates a smoother user experience, which is especially valuable for optimizing business applications. ... As more companies get in on the action and AI-capable computers become ubiquitous, the premium price of AI PCs will continue to drop. Furthermore, Flower said the potential gains in performance offset any price differences. "In those high-value professions, the productivity gain is so significant that whatever small premium you're paying for that AI-enhanced device, the payback will be nearly immediate," said Flower.


Many CIOs operate within a culture of fear

The culture of fear often stems from a few roots, including a lack of accountability from employees who don’t understand their roles, and mistrust of coworkers and management, says Alex Yarotsky, CTO at Hubstaff, vendor of a time tracking and workforce management tool. In both cases, company leadership is to blame. Good leaders create a positive culture laid out in a set of rules and guidelines for employees to follow, and then model those actions themselves, Yarotsky says. “Any case of misunderstanding or miscommunication is always on the management because the management is the force in the company that sets the rules and drives the culture,” he adds. ... Such a culture often starts at the top, says Jack Allen, CEO and chief Salesforce architect at ITequality, a Salesforce consulting firm. Allen experienced this scenario in the early days of building a career, suggesting the problems may be bigger than the survey respondents indicate. “If the leader is unwilling to admit mistakes or punishes mistakes in an unfair way, then the next layer of leadership will be afraid to admit mistakes as well,” Allen says. ... Cultivating a culture of fear leads to several problems, including an inability to learn from mistakes, Mort says. “Organizations that do the best are those that value learning and highlight incidents as valuable learning events,” he says.

Daily Tech Digest - April 29, 2024

The end of vendor-backed open source?

Considering what’s happened over the last few years, I hope we are witnessing the end of single-vendor-backed open-source projects. Many of these companies see open source as a vehicle to drive the adoption of their software and then switch to a prohibitive license once the software starts to reach massive adoption through a cloud provider. This has happened more times in recent years than I care to count. There are only a few companies that can reliably provide open-source software without hurting their own bottom line. These are the software giants like AWS, Google, and Microsoft. They can comfortably contribute to open-source software and collaborate with companies who specialize in some of the more niche use cases. They may not have been able to cover these niches on their own, but with the help of the community they can. And at the end of the day, even the smaller companies who provide managed offerings need somewhere to host them, right? I see a bright future for Valkey, the open-source fork of Redis. It’s starting off right in the Linux Foundation, where companies contribute freely without fear of a single company dictating the direction of a project.


The Rise of Augmented Analytics: Combining AI with BI for Enhanced Data Insights

Traditionally, data analysis has been the domain of highly skilled data scientists and BI experts. Augmented analytics changes this dynamic by empowering non-technical users to participate actively in the insight-generation process. Augmented analytics automates complex tasks and presents information in a user-friendly format so that employees across departments can easily access, explore, and derive insights from their organization’s data. This both streamlines and simplifies business processes: the relevant teams can use data to find the right information without waiting for an expert to analyze it. ... One of the most significant challenges faced by businesses is the shortage of data science expertise. Augmented analytics helps alleviate this bottleneck by reducing the reliance on specialized data professionals.


The 4 Types Of Generative AI Transforming Our World

Generative Adversarial Networks (GANs) emerged in 2014 and quickly became one of the most effective models for generating synthetic content, both text and images. The basic principle involves pitting two different algorithms against each other. One is known as the ‘generator,’ and the other is known as the ‘discriminator,’ and both are given the task of getting better and better at out-foxing each other. ... Neural Radiance Fields (NeRFs) are the newest technology covered here, only emerging onto the scene in 2020. Unlike the other generative technologies, they are specifically used to create representations of 3D objects using deep learning. This means creating an aspect of an image that can't be seen by the ‘camera’ ... One of the latest advancements in the field of generative AI is the development of hybrid models, which combine various techniques to create innovative content generation systems. These models draw on the strengths of different approaches, such as blending the adversarial training of Generative Adversarial Networks (GANs) with the iterative denoising of diffusion models to produce more refined and realistic outputs.
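The adversarial principle behind GANs can be demonstrated with a deliberately tiny sketch: a one-parameter "generator" and a logistic "discriminator" take alternating gradient steps against each other. Everything here is a toy for building intuition, not a practical GAN:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy 1-D GAN: the "real" data is the single point 3.0, the generator is one
# parameter theta, and the discriminator is a logistic classifier D(x).
theta, w, b, lr = 0.0, 0.0, 0.0, 0.05
real = 3.0
history = []

for _ in range(2000):
    fake = theta
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    w += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)
    # Generator step: ascend log D(fake) -- move toward what fools D.
    theta += lr * (1 - sigmoid(w * theta + b)) * w
    history.append(theta)
```

Because the generator's only way to "fool" the discriminator is to produce samples indistinguishable from the real point at 3.0, its parameter is pulled toward that value as the two players out-fox each other.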


Closing the cybersecurity skills gap with upskilling programs

The power of upskilling is undeniable but putting it into practice with employees is the real challenge. The top reason organizations struggle to implement successful upskilling programs has not changed in the last three years of this study: lack of time. Despite clear barriers, organizations can unlock upskilling engagement with a culture of continuous learning. The first step is to identify existing skills in order to see the gaps. Leaders need to stay engaged in this process: only 33% of executives completely understand the skills their IT teams need and 68% of technologists say leadership at their organization is not aware of a tech skills gap. When it comes to discovering what drives employees to upskill, a new #1 motivation to participate emerged this year: stronger job security and improved confidence. This is yet another proof point in upskilling contributing to higher employee engagement that can drive performance and innovation. With 78% of organizations abandoning projects partway through because of not having enough employees with the right tech skills, there is no time to waste in closing the gap.


Are Enterprises Overconfident About Cybersecurity Readiness?

Comparing statistics for global companies and their overall state of readiness, between the years 2022-2023 and 2023-2024, shows certain surprising changes. The number of Mature companies dropped to 3% this year from 15% in the previous year. The number of Progressive companies dropped by 4% this year; however, the number of Formative companies increased by 13%. In addition, more Beginners were included in the index this year. Raymond Janse van Rensburg, VP, specialists and solutions engineering, APJC, Cisco, said this should not be considered as a static comparison, as the benchmark was done with “a lot more capabilities” this year. "Digitization continues to accelerate around us. We see that there is pressure on organizations to transform. But with that, the attack surface is changing, and there is a lot of technology advancement that is taking place as well," said van Rensburg. "We need to keep pace with change, with the transformation of the organizations, and also to keep pace with change for cybersecurity and the capabilities we bring in across the industry to provide the necessary protections."


Risk orchestration: Addressing the Misconceptions – Hub Platform vs True Orchestration Performance

A true risk orchestration platform synchronises workflows and data and enables customisable journeys. Waterfalling reduces friction and saves costs, ensuring appropriate checks are only applied as necessary. Genuine customers are fast-tracked, while others are routed for additional due diligence. The technology unites complex systems and processes to create a single view of customer risk, a unified risk score and automated decision making. ... In true risk orchestration, all of these processes are seamlessly synchronised to deliver a single view of customer risk. Informed decision making is made simpler, smoother, more accurate and efficient. The impact of this is felt not only in customer satisfaction, which can be vastly enhanced by the speed at which individuals are authenticated and served, but in the overall effectiveness of compliance processes too, in terms of how much time and resource is spent. To that end, there’s a genuine bottom-line benefit to be realised through effective risk orchestration of your compliance processes. 
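The waterfalling idea, cheap checks first with escalation only when needed, reduces to a short routing loop. The check names, scores, and thresholds here are invented for illustration:

```python
def orchestrate(customer, checks):
    """Waterfall: run cheap checks first; stop as soon as risk is resolved.

    Each check returns a score in [0, 1]; a low score fast-tracks the
    customer, a high score escalates to the next (more expensive) check.
    """
    applied = []
    for name, check, clear_below in checks:
        score = check(customer)
        applied.append(name)
        if score < clear_below:
            return {"decision": "fast-track", "checks": applied, "score": score}
    return {"decision": "manual-review", "checks": applied, "score": score}

# Hypothetical checks, ordered cheapest first.
CHECKS = [
    ("document",  lambda c: 0.1 if c.get("id_verified") else 0.9, 0.3),
    ("sanctions", lambda c: 0.8 if c.get("sanctions_hit") else 0.1, 0.3),
]
```

Genuine customers clear the first, cheapest check and never incur the cost of the later ones; only ambiguous or risky cases flow further down the waterfall.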


If Software Quality Is Everybody’s Responsibility, So Is Failure

Quality failures tend to stem from broader organizational and leadership failures. Scapegoating testers for systemic issues is counterproductive. It obscures the real problems and stands in the way of meaningful solutions to quality failings. ... To build ownership, organizations need to shift testing upstream, integrating it earlier into requirements planning, design reviews, and development processes. It also requires modernizing the testing practice itself, utilizing the full range of innovation available: from test automation, shift-left testing, and service virtualization, to risk-based test case generation, modeling, and generative AI. With a shared understanding of who owns quality, governance policies can better balance competing demands around cost, schedule, capabilities, and release readiness. Testing insights will inform smarter tradeoffs, avoiding quality failures and the finger-pointing that today follows them. This future state reduces the likelihood of failures — but also acknowledges that some failures will still occur despite best efforts. In these cases, organizations must have a governance model to transparently identify root causes across teams, learn from them, and prevent recurrence.


Navigating personal liability: post data-breach recommendations for CISOs

The single most important guidance that you should follow is to involve counsel promptly and frequently. Upon your first knowledge of a claimed data breach, you should promptly determine who your organization’s legal counsel is for such matters and contact those lawyers. ... It can be tempting for CSOs and CISOs to take the reins in data breach incidents, given their technical expertise or sense of personal responsibilities. However, this can lead to unintended legal complications. In the aftermath of a data breach, it’s critical to let your organization’s legal counsel guide decision-making processes. They can ensure that the response to the data breach complies with applicable laws and that both communication and remediation efforts are handled appropriately to minimize potential liability. ... Although it’s rare to face personal liability or criminal charges, there can be situations where it could be a real or feared risk. Independent legal advice can provide guidance tailored to your specific situation, to identify where your interests may be different from those of your organization, to allay your concerns, all of which can be protected under attorney-client privilege.


Change Healthcare Admits to Making Ransom Payment to Protect Patient Data, Discloses That Hacker Broke in Days Before Attack

Even though a ransom payment was made, the attack was nevertheless devastating to the processing of health insurance claims and payment information across the country. The UnitedHealth Group update indicates that 99% of pharmacies are now back to normal ability to process claims, and that its internal payment processing capacity is back to 86% of normal. 80% of the function of its major platforms and products has also been restored and the group expects a full recovery in a matter of weeks. It also says that medical claims are back to near-normal processing levels, but that it is working directly with some smaller providers that remain hampered by the attack fallout and is seeking to set up alternate methods of submission for them. The ransom payment had been widely reported prior to the company’s admission, due to an otherwise inexplicable Bitcoin transaction (equaling about $22 million) traced to a wallet known to be associated with AlphV. The RaaS provider also opted to pull what appeared to be an “exit scam” after this payment rolled in, leading to the bag-holding affiliate taking to dark web forums to accuse the group of theft.


Top 10 barriers to strategic IT success

The State of the CIO Study found that staff and skills shortages was the No. 1 challenge cited by CIOs when asked what most often forces them to redirect resources away from strategic tasks, with 59% saying as much. “Talent — or the lack of it — is a huge, huge issue, and when we look at the demographics, we don’t see that changing in the future,” says Chirajeet (CJ) Sengupta, managing partner at Everest Group, a global research firm. Sengupta acknowledges that layoffs at the tech giants in the past year or so eased the talent crunch a bit — but only for other big companies who could offer highly-competitive salaries to those laid-off workers. And while CIOs are working to train internal candidates within IT and the business units for tough-to-fill tech jobs or are using contractors to fill in staffing gaps, Sengupta says those practices create talent challenges, too. Upskilling takes time, and contracted workers aren’t usually as close to the business as employees. Now there’s news that the competition for tech workers may heat up even more this year.



Quote for the day:

"'To do great things is difficult; but to command great things is more difficult.'' -- Friedrich Nietzsche

Daily Tech Digest - October 26, 2023

CTOs Look to Regain Control of the IT Roadmap

Putting an emphasis on modular architecture and open standards can ensure easier integration or disengagement from specific solutions, thereby mitigating these concerns. ... instead of an expensive and time consuming “rip and replace” model, organizations are extending the life and value of their existing ERP investments and shifting their newly freed up resources to drive innovation “around the edges” of their current robust ERP core. “This approach applies to all industries and sizes, enabling organizations to minimize churn and focus on customer value, competitive advantage and growth,” he says. The survey also indicated IT leaders are exploring alternatives to subscription-based licensing models, focusing on optimizing operational costs and aligning investments with business strategies for growth and innovation. “Applications that enable competitive advantage and differentiate a company are a high priority for organizations, while for example, ERP administration functions like HR and finance offer very little differentiation and are frequently retained as a foundational core, optimized for cost and efficiency,” Rowe explains.


Measure Developer Joy, Not Productivity, Says Atlassian Lead

So, when senior leadership is under pressure to show the outcome of one of their most sizable operating expenses, what’s a tech company to do? First, Boyagi suggested, change your questions. Instead of “How do I increase developer productivity?” or “How can I measure developer productivity?” try “How can I make developers happier?” and “How can I help developers be more productive?” The questions can help steer the conversation in a more useful direction: “I think every company has to go on a journey and do what’s right for them in terms of productivity. But I don’t think I think measurement is the thing we should be talking about.” First, because productivity for knowledge workers has always been one of the hardest things to measure. And, he added, because we need to take inspiration from other companies, not replicate what they do. Boyagi doesn’t suggest you try to do what Atlassian does. But feel free to take inspiration from and leverage its DevEx strategy, as well as those from the likes of other high-performing organizations like Google, Netflix, LinkedIn and Spotify.


How much cybersecurity expertise does a board need?

For companies that have not yet built up cybersecurity expertise among their directors and reporting committees, there’s work to do, says Lam, who explains there are a number of ways to build up that "cyber-IQ". “One is you should get the right board talent in terms of risk and cyber expertise that’s appropriate to their risk profiles,” says Lam, who explains that companies leery of using up a hotly contested director seat for a cyber specialist simply need to broaden their recruitment parameters. ... As organizations slowly morph their board composition, they also need to be careful to not get into a situation where one director is solely responsible for cybersecurity oversight and no one else minds that area of risk, warns Chenxi Wang ... “There’s been an explosive offering of cyber governance training in recent years. While that is a great step in the right direction, a lot of them vary as far as the quality of content goes,” Shurtleff tells CSO. “You can’t substitute somebody’s cyber experience and knowledge from a lifetime of professional experience into a two-week course. ...”


What is a business intelligence analyst? A key role for data-driven decisions

The role is becoming increasingly important as organizations move to capitalize on the volumes of data they collect through business intelligence strategies. BI analysts typically discover areas of revenue loss and identify where improvements can be made to save the company money or increase profits. This is done by mining complex data using BI software and tools, comparing data to competitors and industry trends, and creating visualizations that communicate findings to others in the organization. ... It’s a role that combines hard skills such as programming, data modeling, and statistics with soft skills such as communication, analytical thinking, and problem-solving. Candidates need a well-rounded background to balance the line between IT and the business, and usually a bachelor’s degree in computer science, business, mathematics, economics, statistics, management, accounting, or a related field. If you have a degree in an unrelated field but have completed courses in these subjects, that can suffice for an entry-level role in some organizations. Other senior positions may require an MBA, but there are plenty of BI jobs that require only an undergraduate degree.


Infrastructure teams need multi-cloud networking and security guardrails

The key is to ensure that the technology implemented is actually providing a guardrail and not imposing a speedbump or roadblock. Network and security teams need to provide infrastructure and services that are programmatic and easy to use. For instance, DevOps should be able to request IP addresses, spin up secure DNS services, request changes to firewall policies, or adjust transit routing with a couple clicks. If approvals are required from network and security teams, those approvals should be automated as much as possible. This drive toward programmatic services is apparent in my research at Enterprise Management Associates (EMA). For instance, I recently surveyed 351 IT professionals about their multi-cloud networking strategies for the report “Multi-Cloud Networking: Connecting and Securing the Future.” (Check out EMA’s free webinar to learn more about what we found in that research). In that report, 82% of respondents told us that it was at least somewhat important for their multi-cloud networking solutions to have open APIs.
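The automated-approval guardrail described above can be sketched as a small policy function. The request fields and rules are hypothetical examples, not any vendor's API:

```python
# A sketch of automated approval guardrails: routine, low-risk requests
# are approved programmatically; anything else goes to a human queue.
ALLOWED_SELF_SERVICE = {"allocate_ip", "create_dns_record"}
SENSITIVE_ZONES = {"prod-payments", "prod-pii"}

def review(request):
    """Decide whether a DevOps infrastructure request needs a human."""
    if (request["action"] in ALLOWED_SELF_SERVICE
            and request["zone"] not in SENSITIVE_ZONES):
        return "auto-approved"
    if request["action"] == "firewall_change" and request.get("direction") == "deny":
        return "auto-approved"   # tightening policy is safe to self-serve
    return "needs-human-review"
```

The point is the guardrail shape: the common, low-risk path is a couple of clicks with no waiting, while sensitive zones and risky changes still route through the network and security teams.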


Demystifying the top five OT security myths

“A common belief is that the OT protocols are proprietary, and the attacker doesn’t have access to OT devices or specific proprietary protocols,” he said. “To some extent, the proprietary nature of the OT device does pose a challenge to hacking, but threat actors behind targeted attacks are usually knowledgeable, persistent and resourceful.” Goh said such threat actors, particularly those backed by nation-states, have the resources to replicate an OT system, and create and rigorously test their malware in a lab before launching an attack. “This possibility is highly speculated in the Triton malware attack, which happened in 2017 in a malicious attempt to destroy and damage a petrochemical plant in Saudi Arabia by targeting the safety system,” he added. ... In the concept of defence-in-depth, firewalls are used to separate the different layers of an OT network. Goh said while it is mandatory to use firewalls to protect an OT network from unauthorised access, this protection is only as good as the policy and the security of the firewall. “We all know that misconfigurations of firewall rules happen and are not uncommon,” he said, citing a study that found one in five firewalls have one or two configuration issues.


JPMorgan Chase CISO explains why he's an 'AI optimist'

We've started to look at it. That's the short answer. The longer answer is, I was a bit of an AI pessimist before November of last year. Seeing ChatGPT in action for the first time and what it could do opened my mind -- perhaps many others' as well. It felt like we tipped over the precipice of an AI era. I'm an optimist about its capabilities. Most of the last nine or 10 months or so have been us trying to enable AI to use inside of the firm. We have been users of traditional AI for some time. Generative AI is newer for us in the business. We've spent the last six or seven months designing the right isolated mechanisms that are safe for us to use to produce our data. That's something we'll start doing internally as a business more broadly and think through how we use it as a cybersecurity use case. It's probably not going to be done in a generic sense in the short-term. Cybersecurity practitioners and maybe some industry consortiums need to get together to build and train the right models to support cybersecurity. It's clear to me that one, everybody's thinking about how they use AI in their tech. 


CISOs struggling to understand value of security controls data

Understanding where security controls are failing is a critical first step to mitigating cyber risk and making the right decisions. Unfortunately, only 36% of security leaders are totally confident in their security data and use it for all strategic decision making. This is a concerning finding, as without trusted data CISOs might struggle to influence senior business stakeholders and ensure the right people are held accountable for fixing security issues. ... The benefits of improving data quality and trust are clear, with 84% of security leaders believing that increasing trust in their data would help them secure more resources to protect their organization. But first there needs to be a mindset change in security leaders and the board—away from using controls data for reporting, and instead embracing it to proactively drive business decisions and stop problems before they occur. “The industry needs to change if we are to solve the CISO security controls conundrum, and Continuous Controls Monitoring (CCM) can be the catalyst. It isn’t a better reporting tool, it’s a way of knowing what to do next – making day-to-day cybersecurity firefighting easier and getting ahead of the game on strategic risk,” argues Panaseer Security Evangelist, Marie Wilcox.


How to Become a Data Governance Specialist

Generally, a DG specialist will have a bachelor's degree in a computer-related field (information technology, computer science) and one to four years of experience. However, a combination of computer and communication skills is needed for this position. Extensive technical experience can stand in for a bachelor's degree, but the lack of a degree will limit chances for advancement and promotion. Some employment advertisements will require a Data Governance and Stewardship certification. The certification process typically requires a degree, attending a workshop, passing a test, and a fair amount of experience. Certification can be difficult to get, in part because very few organizations offer it. This requirement may be an unrealistic expectation on the part of the employer, particularly for non-management positions. ... Much of Data Governance is actually about changing habitual behavior. When changes are made, it is common for a team to be assembled to execute the project. A Data Governance program must be presented as a practice and not a project. Projects have start and end dates.


Has Your Architectural Decision Record Lost Its Purpose?

Sometimes the expected longevity of a decision causes a team to believe that a decision is architectural. Most decisions become long-term decisions because the funding model for most systems only considers the initial cost of development, not the long-term evolution of the system. When this is the case, every decision becomes a long-term decision. This does not make these decisions architectural, however; they need to have high cost and complexity to undo/redo in order for them to be architecturally significant. To illustrate, a decision to select a database management system is usually regarded as architectural because many systems will use it for their lifetime, but if this decision is easily reversed without having to change code throughout the system, it’s generally not architecturally significant. Modern RDBMS technology is quite stable and relatively interchangeable between vendor products, so replacing a commercial product with an open-source product, and vice versa, is relatively easy so long as the interfaces with the database have been isolated.
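The isolation the author describes can be sketched with a small repository interface: the rest of the system depends only on an abstract contract, so swapping the underlying RDBMS means writing one new class rather than changing code throughout the system. This is an illustrative sketch, not the article's own example; the `OrderRepository` name and the SQLite backend are assumptions made here for demonstration.

```python
# Sketch of isolating database interfaces so the DBMS choice stays reversible.
# OrderRepository and its methods are hypothetical names for illustration.
from abc import ABC, abstractmethod
import sqlite3

class OrderRepository(ABC):
    """Abstract contract the rest of the system codes against."""
    @abstractmethod
    def save(self, order_id: str, total: float) -> None: ...
    @abstractmethod
    def total_for(self, order_id: str) -> float: ...

class SqliteOrderRepository(OrderRepository):
    """One concrete backend. A PostgresOrderRepository (or a commercial
    product's adapter) could replace it without touching any caller,
    which is what keeps the DBMS decision cheap to undo."""
    def __init__(self, path: str = ":memory:") -> None:
        self._conn = sqlite3.connect(path)
        self._conn.execute(
            "CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, total REAL)"
        )

    def save(self, order_id: str, total: float) -> None:
        self._conn.execute(
            "INSERT OR REPLACE INTO orders VALUES (?, ?)", (order_id, total)
        )
        self._conn.commit()

    def total_for(self, order_id: str) -> float:
        row = self._conn.execute(
            "SELECT total FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        return row[0]

# Callers hold only the abstract type, never a vendor-specific one.
repo: OrderRepository = SqliteOrderRepository()
repo.save("A-1", 42.50)
print(repo.total_for("A-1"))
```

Under this structure, replacing the database is the cost of one adapter class, which is exactly why the author argues such a decision, though long-lived, may not be architecturally significant.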



Quote for the day:

"The task of leadership is not to put greatness into humanity but to elicit it, for the greatness is already there." -- John Buchan