
Daily Tech Digest - May 11, 2026


Quote for the day:

“The entrepreneur builds an enterprise; the technician builds a job.” -- Michael Gerber

🎧 Listen to this digest on YouTube Music


Duration: 17 mins • Perfect for listening on the go.


If AI Owns the Decision, What Happens to Your Bank? 4 Smart Moves Now Will Aid Survival

The article from The Financial Brand explores the transformative role of artificial intelligence in reshaping consumer financial decision-making and the banking landscape. As AI tools become more sophisticated, they are moving beyond simple automation to provide hyper-personalized financial coaching and autonomous management. This shift allows consumers to delegate complex tasks—such as optimizing savings, managing debt, and selecting investment portfolios—to algorithms that analyze vast amounts of real-time data. For financial institutions, this evolution presents both a challenge and an opportunity; banks must transition from being mere transactional platforms to becoming proactive financial partners. The integration of generative AI is particularly highlighted as a catalyst for creating more intuitive user interfaces that can explain financial nuances in natural language. However, the piece also emphasizes the critical importance of trust and transparency. For AI to be truly effective in a banking context, providers must ensure ethical data usage and maintain a "human-in-the-loop" approach to mitigate algorithmic bias and security risks. Ultimately, the future of banking lies in a hybrid model where technology handles the heavy analytical lifting, enabling customers to achieve better financial health through data-driven confidence and streamlined digital experiences.


AI tool poisoning exposes a major flaw in enterprise agent security

In this VentureBeat article, Nik Kale examines the emerging threat of AI tool poisoning, which exposes a fundamental flaw in enterprise agent security architectures. Modern AI agents select tools from shared registries by matching natural-language descriptions, but these descriptions lack human verification. This oversight enables selection-time threats like tool impersonation and execution-time issues such as behavioral drift. While traditional software supply chain controls like code signing and Software Bill of Materials (SBOMs) effectively ensure artifact integrity, they fail to address behavioral integrity—whether a tool actually does what it claims. A malicious tool might pass all artifact checks while containing prompt-injection payloads or altering its server-side behavior post-publication to exfiltrate sensitive data. To counter this, Kale proposes a runtime verification layer using the Model Context Protocol (MCP). This system employs discovery binding to prevent bait-and-switch attacks, endpoint allowlisting to block unauthorized network connections, and output schema validation to detect suspicious data patterns. By implementing a machine-readable behavioral specification, organizations can establish a tamper-evident record of a tool's intended operations. Kale advocates for a graduated security model, beginning with mandatory endpoint allowlisting, to protect enterprise AI ecosystems from the growing risks of automated agent manipulation and data theft.
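
To make those checks concrete, here is a minimal TypeScript sketch of the three runtime controls the summary names: discovery binding, endpoint allowlisting, and output schema validation. The tool name, endpoints, and schema fields are hypothetical, and a real MCP deployment would carry a much richer, signed behavioral specification.

```typescript
import { createHash } from "node:crypto";

// Hypothetical behavioral specification published alongside a tool.
interface ToolSpec {
  name: string;
  description: string;
  allowedEndpoints: string[];          // hosts the tool is permitted to contact
  outputSchema: { fields: string[] };  // fields the tool is allowed to return
}

// Discovery binding: pin the description seen at selection time so a later
// bait-and-switch (same name, different behavior) becomes detectable.
function bindDiscovery(spec: ToolSpec): string {
  return createHash("sha256")
    .update(spec.name + "\n" + spec.description)
    .digest("hex");
}

// Endpoint allowlisting: reject any outbound call not named in the spec.
function checkEndpoint(spec: ToolSpec, url: string): boolean {
  return spec.allowedEndpoints.includes(new URL(url).host);
}

// Output schema validation: flag unexpected fields that may signal exfiltration.
function validateOutput(spec: ToolSpec, output: Record<string, unknown>): string[] {
  return Object.keys(output).filter((k) => !spec.outputSchema.fields.includes(k));
}

// --- Example with a hypothetical "weather" tool ---
const spec: ToolSpec = {
  name: "weather",
  description: "Returns the current temperature for a city.",
  allowedEndpoints: ["api.weather.example.com"],
  outputSchema: { fields: ["city", "temperatureC"] },
};

console.log("discovery binding:", bindDiscovery(spec));                   // stored at selection time
console.log(checkEndpoint(spec, "https://api.weather.example.com/v1"));   // true
console.log(checkEndpoint(spec, "https://attacker.example.net/upload"));  // false
console.log(validateOutput(spec, { city: "Oslo", temperatureC: 4, apiKeys: "..." })); // ["apiKeys"]
```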


Why OT security needs bilingual leaders

The article from e27 emphasizes the critical necessity for "bilingual" leadership in the realm of Operational Technology (OT) security to bridge the widening gap between industrial operations and Information Technology (IT). As critical infrastructure becomes increasingly digitized, the traditional silos separating shop-floor engineers and corporate cybersecurity teams have become a significant liability. The author argues that true bilingual leaders are those who possess a deep technical understanding of industrial control systems alongside a sophisticated grasp of modern cybersecurity protocols. These leaders act as essential translators, capable of explaining the nuances of "uptime" and physical safety to IT departments, while simultaneously articulating the urgency of threat landscapes and data integrity to plant managers. The piece highlights that the convergence of these two worlds often results in friction due to differing priorities—where IT focuses on confidentiality, OT prioritizes availability. By fostering leadership that speaks both "languages," organizations can implement holistic security frameworks that do not compromise production efficiency. Ultimately, the article contends that the future of industrial resilience depends on a new generation of executives who can navigate the complexities of both the digital and physical domains, ensuring that cybersecurity is integrated into the very fabric of industrial engineering rather than treated as an external afterthought.


The agentic future has a technical debt problem

In the article "The Agentic Future Has a Technical Debt Problem," Barr Moses argues that the rapid, competitive deployment of AI agents is mirroring the early mistakes of the cloud migration era. Drawing on a survey of 260 technology practitioners, Moses highlights a significant disconnect between engineering leaders and the "builders" on the ground. While leadership often maintains a high level of confidence in system reliability, nearly two-thirds of organizations admitted to deploying agents faster than their teams felt prepared to support. This haste has led to a massive accumulation of technical debt; over 70% of fast-deploying builders anticipate needing to significantly rearchitect or rebuild their systems. Critical operational foundations, such as observability, governance, and traceability, are frequently sacrificed for speed, leaving engineers to deal with agents that access unauthorized data or lack manual override switches. The survey reveals that visibility into agent behavior remains a primary blind spot, with most production issues being discovered via customer complaints rather than automated monitoring. Ultimately, the piece warns that without a shift toward prioritizing infrastructure and instrumentation, the industry faces an inevitable "rebuild reckoning." Moving forward, organizations must bridge the perception gap between management and developers to ensure that agentic systems are not just shipped, but are sustainable and controllable.
The article "In Regulated Industries, Faster Testing Still Has to Be Defensible" explores the delicate balance software engineering teams in sectors like healthcare and finance must maintain between rapid AI-driven innovation and stringent compliance requirements. While there is significant pressure from stakeholders to accelerate release cycles through generative AI for test generation and defect analysis, the author emphasizes that speed must not come at the expense of auditability. In regulated environments, software must not only function correctly but also possess a comprehensive audit trail, including documented validation, end-to-end traceability, and clear evidence of control. The piece argues that AI-generated artifacts should be subject to the same rigorous version control and formal human review as traditional engineering outputs, as accountability cannot be delegated to an algorithm. Crucially, traceability should be integrated early into the planning phase rather than treated as a post-development cleanup task. Ultimately, the adoption of AI in quality engineering is most effective when it strengthens release discipline and supports human-led verification processes. By prioritizing narrow scopes, clear data access policies, and ongoing education, organizations can leverage modern technology to achieve faster delivery without sacrificing the defensibility of their testing records or risking non-compliance with regulatory frameworks.


DevSecOps explained for growing technology businesses

The article "DevSecOps explained for growing technology businesses," authored by Clear Path Security Ltd, details how small-to-medium enterprises (SMEs) can integrate security into their development lifecycles without sacrificing speed. The article defines DevSecOps as a cultural and procedural shift where security is woven into daily delivery flows rather than being a separate concluding step. For growing firms, the primary advantage lies in reducing expensive rework and late-stage surprises by catching vulnerabilities early. The framework rests on three pillars: people, process, and tooling. Instead of overwhelming teams with complex enterprise-grade protocols, the author suggests a risk-based, gradual implementation focusing on high-impact areas like customer-facing apps and sensitive data handling. Core initial controls should include automated code scanning, dependency checks, and secret detection. Success is measured not by the volume of tools, but by practical metrics like the reduction of post-release vulnerabilities and the speed of high-priority remediation. To ensure adoption, businesses are advised to follow a phased 90-day plan, starting with visibility and basic automation before scaling complexity. Ultimately, the piece argues that DevSecOps acts as a business enabler, fostering confidence and stability by aligning development speed with robust risk management through lightweight, proportionate controls that fit the organization’s specific size and technical needs.


Cuts are coming: is now the time to upskill?

The article "Cuts are coming: is now the time to upskill?" explores the critical need for IT professionals to embrace continuous learning amidst a volatile tech landscape defined by rising redundancies and the disruptive influence of artificial intelligence. Despite persistent skills shortages, the job market has tightened significantly, forcing individuals to take greater personal responsibility for their professional development, often through self-funded and self-directed methods. This shift is characterized by a move away from traditional classroom settings toward agile micro-credentials, cloud-based labs, and specialized certifications in high-demand areas like cloud computing, data analytics, and cybersecurity. While organizations recognize that upskilling existing talent is more cost-effective and resilience-building than external hiring, employer-led investment in training has paradoxically declined over the last decade. Consequently, workers are increasingly motivated by job security concerns, with a majority considering reskilling to maintain their relevance. However, the article highlights an "AI trust paradox," noting that many businesses struggle to implement transformative AI because they lack the necessary foundational data skills and internal expertise. Ultimately, staying competitive in the modern economy requires a proactive approach to skill acquisition, as the widening gap between institutional needs and available talent places the onus of career longevity squarely on the individual professional.


Cloud Security Alliance Expands Agentic AI Governance Work

The Cloud Security Alliance (CSA) has significantly expanded its commitment to securing agentic AI systems through the introduction of three major governance milestones aimed at "Securing the Agentic Control Plane." During the CSA Agentic AI Security Summit, the organization’s CSAI Foundation announced the launch of the STAR for AI Catastrophic Risk Annex, a dedicated initiative running from mid-2026 through 2027 to address high-stakes risks associated with advanced AI autonomy. Furthermore, the CSA achieved authorization as a CVE Numbering Authority via MITRE, allowing it to formally track and categorize vulnerabilities specific to the AI landscape. In a strategic move to standardize security protocols, the CSA also acquired two critical specifications: the Agentic Autonomous Resource Model and the Agentic Trust Framework. The latter, developed by Josh Woodruff of MassiveScale.AI, integrates Zero Trust principles into AI agent operations and aligns with international standards like the NIST AI Risk Management Framework and the EU AI Act. These developments reflect the CSA’s proactive approach to managing the security challenges posed by autonomous AI entities, ensuring that governance, risk management, and compliance keep pace with rapid technological evolution. By centralizing these resources, the CSA aims to provide a unified, transparent architecture for organizations to safely deploy and manage agentic technologies within their enterprise cloud environments.


Stop treating identity as a compliance step. It’s infrastructure now

In the article "Stop treating identity as a compliance step: it’s infrastructure now," Harry Varatharasan of ComplyCube argues that identity verification (IDV) has transcended its traditional role as a back-office compliance task to become foundational digital infrastructure. Across fintech, telecoms, and government services, IDV now serves as the primary mechanism for establishing trust and preventing fraud at scale. Varatharasan highlights a significant industry shift where businesses prioritize orchestration and interoperability, moving toward single, reusable identity layers rather than fragmented, siloed checks. For IDV to function as true infrastructure, it must exhibit three defining characteristics: reliability at scale, trust by design, and—most importantly—interoperability that addresses both technical compatibility and legal liability transfer. The author notes that while the UK’s digital identity consultation is a vital milestone, policy frameworks still struggle to keep pace with the industry's current reality, where the boundaries between public and private verification systems are already dissolving. Fragmentation remains a major hurdle, increasing compliance costs and creating user friction through repetitive verification steps. Ultimately, the article emphasizes that the focus must shift from simply mandating verification to governing it as a shared, portable resource, ensuring that national standards reflect the modern integrated digital economy and future cross-sector needs, while providing a seamless experience for the end-user.


The rapidly evolving digital assets and payments regulatory landscape: What you need to know

The Dentons alert outlines Australia’s sweeping regulatory overhaul of digital assets and payments, signaling the end of previous legal ambiguities. Central to this shift is the Corporations Amendment (Digital Assets Framework) Act 2026, which, starting April 2027, integrates cryptocurrency exchanges and custodians into the Australian Financial Services Licence (AFSL) regime via new categories: Digital Asset Platforms and Tokenised Custody Platforms. Concurrently, a new activity-based payments framework replaces the outdated "non-cash payment facility" concept with Stored Value Facilities (SVF) and Payment Instruments. This system captures diverse services like payment initiation and digital wallets, while excluding self-custodial software. Key consumer protections include a mandate for licensed providers to hold client funds in statutory trusts and enhanced disclosure for stablecoin issuers. Furthermore, "major SVF providers" exceeding AU$200 million in stored value will face prudential oversight by APRA. While exemptions exist for small-scale platforms and low-value services, the firm emphasizes that the transition is complex. With ASIC’s "no-action" position set to expire on June 30, 2026, and parallel AML/CTF obligations already in effect, businesses must urgently assess their licensing needs. This landmark reform ensures that digital asset and payment providers operate under a rigorous, transparent framework equivalent to traditional financial services.

Daily Tech Digest - March 31, 2026


Quote for the day:

“A bad system will beat a good person every time.” -- W. Edwards Deming


🎧 Listen to this digest on YouTube Music


Duration: 22 mins • Perfect for listening on the go.


World Backup Day warnings over ransomware resilience gaps

World Backup Day 2026 serves as a critical reminder of the widening gap between traditional backup strategies and the sophisticated demands of modern ransomware resilience. Industry experts emphasize that many organizations are failing to evolve their recovery plans alongside increasingly complex, fragmented cloud environments spanning AWS, Azure, and SaaS platforms. A major concern highlighted is the tendency for businesses to treat backups as a narrow IT task rather than a foundational pillar of security governance. Statistics from incident response specialists reveal a troubling reality: over half of organizations experience backup failures during significant breaches, and nearly 84% lack a single survivable data copy when first facing an attack. Experts warn that standard native tools often lack the unified visibility and immutability required to withstand malicious encryption or intentional destruction by threat actors. To address these vulnerabilities, the article advocates for a shift toward "breach-informed" recovery orchestration, which includes rigorous, real-world scenario testing and the reduction of internal "blast radiuses." Ultimately, as ransomware attacks surge by over 50% annually, the message is clear: simple data replication is no longer sufficient. True resilience requires a continuous, holistic approach that integrates people, processes, and hardened technology to ensure data is not just stored, but truly recoverable under extreme pressure.


APIs are the new perimeter: Here’s how CISOs are securing them

The rapid proliferation of application programming interfaces (APIs) has fundamentally shifted the cybersecurity landscape, making them the new organizational perimeter. As traditional endpoint protections and web application firewalls struggle to detect sophisticated business-logic abuse, Chief Information Security Officers (CISOs) are adapting their strategies to address this expanding attack surface. The rise of generative AI and autonomous agentic systems has further exacerbated risks by enabling low-skill adversaries to exploit vulnerabilities and automating high-speed interactions that can bypass legacy defenses. To counter these threats, security leaders are implementing robust governance frameworks that include comprehensive API inventories to eliminate "shadow APIs" and integrating automated security validation directly into CI/CD pipelines. A critical component of this modern defense is a shift toward identity-aware security, prioritizing the management of non-human identities and service accounts through least-privilege access. Furthermore, CISOs are centralizing third-party credential management and utilizing specialized API gateways to enforce consistent security policies across diverse cloud environments. By treating APIs as critical business infrastructure rather than mere plumbing, organizations can maintain visibility and control, ensuring that every integration is threat-modeled and continuously monitored for behavioral anomalies in an increasingly interconnected and AI-driven digital ecosystem.
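
The "identity-aware" point can be illustrated with a small sketch of least-privilege checks at an API gateway, written in TypeScript. The service-account names, scopes, and route table are assumptions for illustration, not any particular gateway's API.

```typescript
// Minimal sketch: every non-human identity gets the narrowest scope its route needs.
interface ServiceAccount {
  id: string;
  scopes: Set<string>; // e.g. "orders:read", "orders:write"
}

// Inventory of known non-human identities (eliminating "shadow" callers).
const registry = new Map<string, ServiceAccount>([
  ["svc-reporting", { id: "svc-reporting", scopes: new Set(["orders:read"]) }],
  ["svc-fulfilment", { id: "svc-fulfilment", scopes: new Set(["orders:read", "orders:write"]) }],
]);

// Each route is mapped to the single scope it requires (least privilege per endpoint).
const routeScopes: Record<string, string> = {
  "GET /orders": "orders:read",
  "POST /orders": "orders:write",
};

function authorize(accountId: string, route: string): boolean {
  const account = registry.get(accountId);
  const required = routeScopes[route];
  if (!account || !required) return false; // unknown identity or unmapped ("shadow") route
  return account.scopes.has(required);
}

console.log(authorize("svc-reporting", "GET /orders"));   // true
console.log(authorize("svc-reporting", "POST /orders"));  // false: write scope never granted
console.log(authorize("svc-unknown", "GET /orders"));     // false: identity not inventoried
```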


Q&A: What SMBs Need To Know About Securing SaaS Applications

In this BizTech Magazine interview, Shivam Srivastava of Palo Alto Networks highlights the critical need for small to medium-sized businesses (SMBs) to secure their Software as a Service (SaaS) environments as the web browser becomes the modern workspace’s primary operating system. With SMBs typically managing dozens of business-critical applications, they face significant risks from visibility gaps, misconfigurations, and the rising threat of AI-powered attacks, which hit smaller firms significantly harder than large enterprises. Srivastava emphasizes that traditional antivirus solutions are insufficient in this browser-centric era, particularly when employees use unmanaged devices or accidentally leak sensitive data into generative AI tools. To mitigate these risks, he advocates for a "crawl, walk, run" strategy that prioritizes the adoption of a secure browser as the central command center for security. This approach allows businesses to fulfill their side of the shared responsibility model by protecting the "last mile" where users interact with data. By implementing secure browser workspaces, multi-factor authentication, and AI data guardrails, SMBs can establish a manageable yet highly effective defense. As the landscape evolves toward automated AI agents and app-to-app integrations, centering security on the browser ensures that small businesses remain protected against the next generation of automated, browser-based threats.


Developers Aren't Ignoring Security - Security Is Ignoring Developers

The article "Developers Aren’t Ignoring Security, Security is Ignoring Developers" on DEVOPSdigest argues that the traditional disconnect between security teams and developers is not due to developer negligence, but rather a failure of security processes to integrate with modern engineering workflows. The central premise is that developers are fundamentally committed to quality, yet they are often hindered by security tools that prioritize "gatekeeping" over enablement. These tools frequently generate excessive false positives, leading to alert fatigue and friction that slows down delivery cycles. To bridge this gap, the author suggests that security must "shift left" not just in timing, but in mindset—moving away from being a final hurdle to becoming an automated, invisible part of the development lifecycle. This involves implementing security-as-code, providing actionable feedback within the Integrated Development Environment (IDE), and ensuring that security requirements are defined as clear, achievable tasks rather than abstract policies. Ultimately, the piece contends that for DevSecOps to succeed, security professionals must stop blaming developers for gaps and instead focus on building developer-centric experiences that make the secure path the path of least resistance.


Beyond the Sandbox: Navigating Container Runtime Threats and Cyber Resilience

In the article "Beyond the Sandbox: Navigating Container Runtime Threats and Cyber Resilience," Kannan Subbiah explores the evolving landscape of cloud-native security, emphasizing that traditional "Shift Left" strategies are no longer sufficient against 2026’s sophisticated runtime threats. Unlike virtual machines, containers share the host kernel, creating an inherent "isolation gap" that attackers exploit through container escapes, poisoned runtimes, and resource exhaustion. To bridge this gap, Subbiah advocates for advanced isolation technologies such as Kata Containers, gVisor, and Confidential Containers, which provide hardware-level protection and secure data in use. Central to building a "digital immune system" is the implementation of cyber resilience strategies, including eBPF for deep kernel observability, Zero Trust Architectures that prioritize service identity, and immutable infrastructure to prevent configuration drift. Furthermore, the article highlights the increasing importance of regulatory compliance, referencing global standards like NIST SP 800-190, the EU’s DORA and NIS2, and Indian frameworks like KSPM. Ultimately, the author argues that true resilience requires shifting from a "fortress" mindset to an automated, proactive approach where containers are continuously monitored and secured against the volatility of the runtime environment, ensuring robust defense in a high-density, multi-tenant cloud ecosystem.


AI-first enterprises must treat data privacy as architecture, not an afterthought

In an exclusive interview, Roshmik Saha, Co-founder and CTO of Skyflow, argues that AI-first enterprises must transition from viewing data privacy as a compliance checklist to treating it as a foundational architectural requirement. As organizations accelerate their AI journeys, Saha emphasizes the necessity of isolating personally identifiable information (PII) into a dedicated data privacy vault. Because PII constitutes less than one percent of enterprise data but represents the majority of regulatory risk, treating it as a distinct data layer allows for better protection through tokenization and encryption. This approach is particularly critical for AI integration, where sensitive data often leaks into logs, prompts, and models that lack inherent access controls or deletion capabilities. Saha warns that once PII enters a large language model, remediation is nearly impossible, making prevention the only viable strategy. By embedding “privacy by design” directly into the technical stack, companies can ensure that AI systems utilize behavioral patterns rather than raw identifiers. Ultimately, this architectural shift not only simplifies compliance with regulations like India’s DPDP Act but also serves as a strategic enabler, removing legal bottlenecks and allowing businesses to innovate with confidence while safeguarding their long-term data integrity and customer trust.
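
A minimal sketch of the vault-and-tokenization idea follows, assuming an in-memory store purely for illustration: raw identifiers stay in one isolated place, and only opaque tokens ever reach prompts, logs, or models.

```typescript
import { randomUUID } from "node:crypto";

// Illustrative in-memory "vault"; a real privacy vault is a separate, hardened service.
class PrivacyVault {
  private store = new Map<string, string>(); // token -> raw value

  // Swap a raw identifier for an opaque token before it leaves the vault boundary.
  tokenize(raw: string): string {
    const token = `tok_${randomUUID()}`;
    this.store.set(token, raw);
    return token;
  }

  // Only an authorized, audited path should ever call this.
  detokenize(token: string): string | undefined {
    return this.store.get(token);
  }
}

const vault = new PrivacyVault();

// Tokenize PII before building an LLM prompt, so the model only ever sees tokens
// and behavioral context, never the raw identifier.
const emailToken = vault.tokenize("jane.doe@example.com");
const prompt = `Summarise recent activity for customer ${emailToken}.`;
console.log(prompt);                        // contains tok_..., not the address

console.log(vault.detokenize(emailToken));  // "jane.doe@example.com" (audited path only)
```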


The Balance Between AI Speed and Human Control

The article "The Balance Between AI Speed and Human Control" explores the critical tension between rapid technological advancement and the necessity of human oversight. It argues that issues like AI hallucinations are often inherent design consequences of prioritizing fluency and speed over safety safeguards. Currently, global governance is fragmented: the European Union emphasizes rigid regulation, the United States favors innovation with limited accountability, and India seeks a middle path focusing on deployment scale. However, each model faces significant challenges, such as algorithmic bias or systemic failures. The author suggests moving toward a "copilot" framework where AI serves as decision support rather than an autocrat. This requires implementing three interconnected architectural pillars: impact-aware modeling, context-grounded reasoning, and governed escalation with explicit thresholds for human intervention. As artificial general intelligence develops incrementally, nations must shift from treating human judgment as a bottleneck to viewing it as a vital safeguard. Ultimately, the goal is to harmonize efficiency with empathy, ensuring that technological progress does not come at the cost of moral accountability or human potential. By adopting binding technical standards for human overrides in consequential decisions, society can ensure that AI remains a tool for empowerment rather than an uncontrolled force.


Securing agentic AI is still about getting the basics right

As agentic AI workflows transform the enterprise landscape, Sam Curry, CISO of Zscaler, emphasizes that robust security remains grounded in fundamental principles. Speaking at the RSAC 2026 Conference, Curry highlights a major shift toward silicon-based intelligence, where AI agents will eventually conduct the majority of internet transactions. This evolution necessitates a renewed focus on two primary pillars: identity management and runtime workload security. Unlike traditional methods, securing these agents requires sophisticated frameworks like SPIFFE and SPIRE to ensure rigorous identification, verification, and authentication. Organizations must implement granular authorization controls and zero-trust architectures to contain risks, such as autonomous agent sprawl or unauthorized data access. Furthermore, while automation can streamline governance and compliance, Curry warns that security in adversarial environments still requires human judgment to counter unpredictable threats. Ultimately, the successful deployment of agentic AI depends on mastering the basics—cleaning infrastructure, establishing clear accountability, and ensuring auditability. By treating AI agents as distinct identities within a segmented network, businesses can foster innovation without sacrificing security. This balanced approach ensures that as technology advances, the underlying security architecture remains resilient against emerging threats in a world increasingly dominated by autonomous digital entities.
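
As a rough sketch of that identity-first approach, the snippet below treats each agent as a distinct identity with narrowly scoped permissions and logs every access decision for auditability. The SPIFFE-style ID strings and the permission table are illustrative only; a production deployment would verify SPIRE-issued, cryptographically attested identities rather than bare strings.

```typescript
// Agents as first-class identities with narrowly scoped, segment-local permissions.
type AgentId = string; // e.g. "spiffe://corp.example/agent/invoice-bot" (naming convention only)

const permissions = new Map<AgentId, Set<string>>([
  ["spiffe://corp.example/agent/invoice-bot", new Set(["billing-db:read"])],
  ["spiffe://corp.example/agent/support-bot", new Set(["tickets:read", "tickets:write"])],
]);

function isAllowed(agent: AgentId, resource: string): boolean {
  const granted = permissions.get(agent);
  return granted !== undefined && granted.has(resource);
}

// Auditability: every decision is logged with the agent identity attached.
function access(agent: AgentId, resource: string): void {
  const allowed = isAllowed(agent, resource);
  console.log(JSON.stringify({ agent, resource, allowed, at: new Date().toISOString() }));
}

access("spiffe://corp.example/agent/invoice-bot", "billing-db:read"); // allowed
access("spiffe://corp.example/agent/invoice-bot", "hr-db:read");      // denied: agent stays in its segment
```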


Can Your Bank’s IT Meet the Challenge of Digital Assets?

The article from The Financial Brand examines the "side-core" (or sidecar) architecture as a transformative solution for traditional banks seeking to integrate digital assets and stablecoins into their operations. Traditional banking core systems are often decades old and technically incapable of supporting the high-precision ledgers—often requiring eighteen decimal places—and the 24/7/365 real-time settlement demands of blockchain-based assets. Rather than attempting a costly and risky "rip-and-replace" of these legacy cores, financial institutions are increasingly adopting side-cores: modern, cloud-native platforms that run in parallel with the main system. This specialized architecture allows banks to issue tokenized deposits, manage stablecoins, and facilitate instant cross-border payments while maintaining their established systems for traditional functions. By leveraging a side-core, banks can rapidly deploy crypto-native services, attract younger demographics, and secure new deposit streams without significant operational disruption. The article highlights that as regulatory clarity improves through frameworks like the GENIUS Act, the ability to operate these dual systems will become a key competitive advantage for regional and community banks. Ultimately, the side-core approach provides a modular path toward modernization, allowing traditional institutions to remain relevant in an era defined by programmable finance and digital-native commerce.
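
The precision point is easy to demonstrate: 64-bit floating point cannot represent eighteen-decimal token amounts exactly, which is why blockchain-style ledgers keep balances as integers in the smallest unit. A minimal TypeScript sketch, with an illustrative formatting helper:

```typescript
// Token amounts with 18 decimal places exceed what 64-bit floats can hold exactly,
// so ledgers track balances as BigInt base units ("wei"-style).
const DECIMALS = 18n;
const ONE_TOKEN = 10n ** DECIMALS; // 1.000000000000000000 in base units

// Floating point silently loses precision at this scale:
console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(1e18 + 1 === 1e18);  // true: the +1 disappears entirely

// BigInt base units stay exact:
const a = 1n * ONE_TOKEN + 1n;   // 1 token plus 1 base unit
const b = 2n * ONE_TOKEN;
console.log((a + b).toString()); // 3000000000000000001

// Format base units back to a human-readable decimal string for display.
function format(amount: bigint): string {
  const whole = amount / ONE_TOKEN;
  const frac = (amount % ONE_TOKEN).toString().padStart(18, "0");
  return `${whole}.${frac}`;
}
console.log(format(a + b));      // "3.000000000000000001"
```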


Everything You Think Makes Sprint Planning Work, Is Slowing Your Team Down!

In his article, Asbjørn Bjaanes argues that traditional Sprint Planning "best practices"—such as assigning work and striving for accurate estimation—actually undermine team agility by stifling ownership and clarity. He identifies several key pitfalls: first, leaders who assign stories strip developers of their internal sense of control, turning owners into compliant executors. Instead, teams should self-select work to foster initiative. Second, estimation should be viewed as an alignment tool rather than a forecasting exercise; "estimation gaps" are vital opportunities to surface hidden complexities and synchronize mental models. Third, the author warns against mid-sprint interruptions and automatic story rollovers. Rolling over unfinished work without scrutiny ignores shifting priorities and cognitive biases, while unplanned additions break the sanctity of the team’s commitment. Furthermore, Bjaanes emphasizes that a Sprint Backlog without a clear, singular goal is merely a "to-do list" that leaves teams directionless under pressure. Ultimately, real improvement requires shifting underlying beliefs about control and trust rather than simply refining process steps. By embracing healthy disagreement during planning and protecting the team’s autonomy, organizations can move beyond mere compliance toward true high performance, ensuring that planning serves as a strategic compass rather than an administrative burden.

Daily Tech Digest - March 10, 2025


Quote for the day:

“You get in life what you have the courage to ask for.” -- Nancy D. Solomon



The Reality of Platform Engineering vs. Common Misconceptions

In theory, the definition of platform engineering is straightforward. It's a practice that involves providing a company's software developers with access to preconfigured toolchains, workflows, and environments, typically through the use of what's called an Internal Developer Platform (IDP). The goal behind platform engineering is also straightforward: It's to help developers work more efficiently and with fewer risks by allowing them to spin up compliant, ready-made solutions whenever they need them, rather than having to implement everything from scratch. ... Misuses of the term platform engineering aren't all that surprising. A similar phenomenon occurred when DevOps entered the tech lexicon in the late 2000s. Instead of universal recognition of DevOps as a distinct philosophy that involves melding software development to IT operations work, some folks effectively began using DevOps as a catch-all term to refer to anything modern or buzzworthy in the realm of software engineering. The same thing seems to be happening now in platform engineering. The term is apparently being used, at least by some professionals, to refer to any work that involves using a platform of some kind within the context of software development.


Why AI needs a kill switch – just in case

How do you develop your “AI kill switch?” The answer lies in protecting and securing the entire machine-driven ecosystem that AI depends on. Machine identities – such as digital certificates, access tokens and API keys – authenticate and authorise AI functions and their abilities to interact with and access data sources. Simply put, LLMs and AI systems are built on code, and like any code, they need constant verification to prevent unauthorised access or rogue behaviour. If attackers breach these identities, AI systems can become tools for cybercriminals, capable of generating ransomware, scaling phishing campaigns and sowing general chaos. Machine identity security ensures AI systems remain trustworthy, even as they scale to interact with complex networks and user bases – tasks that can and will be done autonomously via AI agents. Without strong governance and oversight, companies risk losing visibility into their AI systems, leaving them vulnerable. Attackers can exploit weak security measures, using tactics like data poisoning and backdoor infiltration – threats that are evolving faster than many organisations realise. ... Machine identity security is a critical first step – it establishes trust and resilience in an AI-driven world. This becomes even more urgent as agentic AI takes on autonomous decision-making roles across industries.
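
A minimal sketch of that gating idea, assuming a simple credential shape and an in-memory revocation list: every autonomous action re-checks the machine identity and a central "kill switch" flag before it runs.

```typescript
// Machine-identity "kill switch": an agent action only proceeds while its
// credential is still valid and has not been revoked by an operator.
interface MachineCredential {
  keyId: string;
  expiresAt: Date;
}

const revokedKeys = new Set<string>(); // flipped by operators to halt an agent immediately

function mayAct(cred: MachineCredential, now: Date = new Date()): boolean {
  if (revokedKeys.has(cred.keyId)) return false; // kill switch pulled
  if (now >= cred.expiresAt) return false;       // credential expired, must re-verify
  return true;
}

const cred: MachineCredential = {
  keyId: "agent-key-42",
  expiresAt: new Date(Date.now() + 60_000), // deliberately short-lived
};

console.log(mayAct(cred)); // true: the agent may act
revokedKeys.add("agent-key-42");
console.log(mayAct(cred)); // false: identity revoked, the agent stops
```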


Cyber resilience under DORA – are you prepared for the challenge?

Many damaging breaches have originated from within digital supply chains, through third-party vulnerabilities, or from internal weaknesses. In 2023, third-party attacks led to 29% of breaches with 75% of third-party breaches targeting the software and technology supply chain. This evolving threat landscape has forced financial institutions to rethink their approach. The future of cyber resilience isn’t about building higher walls - it’s about securing every layer, inside and out. ... One of the most pressing concerns for financial institutions under DORA is the security of their digital supply chains. High-profile cyberattacks in recent years have demonstrated that vulnerabilities often originate not from within an organization's own IT infrastructure, but through weaknesses in third-party service providers, cloud platforms, and outsourced IT partners. DORA places a strong emphasis on third-party risk management, making it clear that security responsibility extends beyond a firm’s immediate network. Ensuring supply chain resilience requires a proactive and continuous approach. FSIs must conduct regular security assessments of all external vendors, ensuring that partners adhere to the same high standards of cybersecurity and risk management. 


Ask a Data Ethicist: How Can We Ethically Assess the Influence of AI Systems on Humans?

Bezou-Vrakatseli et al. provide some guidance in this paper, which outlines the S.H.A.P.E. framework. S.H.A.P.E. stands for secrecy, harm, agency, privacy, and exogeneity. ... If you are not aware that you are being influenced or are unaware of the way in which the influence is taking place, there might be an ethical issue. The idea of intent to influence while keeping that intent a secret speaks to ideas of deception or trickery. ... You might be wondering – what actually constitutes harm? It’s not just physical harm. There are a range of possible harms including mental health and well-being, psychological safety, and representational harms. The authors note that this issue of what is harm – ethically speaking – is contestable, and that lack of consensus can make it difficult to address. ... Human agency has “intrinsic moral value” – that is to say we value it in and of itself. Thus, anything that messes with human agency is generally seen as unethical. There can be exceptions, and we sometimes make these when the human in question might not be able to act in their own best interests. ... Influence may be unethical if there is a violation of privacy. Much has been written about why privacy is valuable and why breaches of privacy are an ethical issue. The authors cite the following – limiting surveillance of citizens, restricting access to certain information, and curtailing intrusions into places deemed private or personal.


Is It Time to Replace Your Server Room with a Data Center?

Rare is the business that starts its IT journey with a full-fledged data center. The more typical route involves creating a server room first, then upgrading to a data center over time as IT needs expand. That raises the question: When should a business replace its server room with a data center? Which performance, security, cost and other considerations should a company weigh when deciding to switch? ... For some companies, the choice between a server room and a data center is clear-cut. A server room best serves small businesses without large-scale IT needs, whereas enterprises typically need a “real” data center. For medium-sized companies, the choice is often less clear. If a business has been getting by for years with just a server room, there is often no single tell-tale sign indicating it’s time to upgrade to a data center. And there is a risk that doing so will cost a lot of money without being necessary. ... A high incidence of server outages or downtime is another good reason to consider moving to a data center. That’s especially true if the outages stem from issues inherent to the nature of the server room – such as power system failures within the entire building, which are less of a risk inside a data center with its own dedicated power source.


How to safely dispose of old tech without leaving a security risk

Printers, especially those with built-in memory or hard drives, can retain copies of documents that were printed or scanned. Routers can store personal information related to network activity, including IP addresses, usernames, and Wi-Fi passwords. Meanwhile, smart TVs, home assistants (like Alexa, Google Home), and smart thermostats may store voice recordings, usage patterns, personal preferences, and even login credentials for streaming services like Netflix and Amazon Prime. As IoT devices become more common, they are increasingly at risk of storing sensitive data. ... Before disposing of a device, it’s essential to completely erase any confidential data. Deleting files or formatting the drive alone isn’t enough, as the data can still be retrieved. The best method for securely wiping data varies depending on the device. ... Windows users can use the “Reset this PC” feature with the option to remove all files and clean the drive, while macOS users can use “Erase Disk” in Disk Utility to securely wipe storage before disposal. Tools like DBAN (Darik’s Boot and Nuke) and BleachBit can also help securely erase data. DBAN is specifically designed to wipe traditional hard drives (HDDs) by completely erasing all stored data. However, it does not support solid-state drives (SSDs), as excessive overwriting can shorten their lifespan.


The great software rewiring: AI isn’t just eating everything; it is everything

Right now, most large language models (LLMs) feel like a Swiss Army knife with infinite tools — exciting but overwhelming. Users don’t want to “figure out” AI. They want solutions, AI agents tailored for specific industries and workflows. Think: legal AI drafting contracts, financial AI managing investments, creative AI generating content, scientific AI accelerating research. Broad AI is interesting. Vertical AI is valuable. Right now, LLMs are too broad, too abstract, too unapproachable for most. A blank chat box is not a product, it is homework. If AI is going to replace applications, it must become invisible, integrating seamlessly into daily workflows without forcing users to think about prompts, settings or backend capabilities. The companies that succeed in this next wave will not just build better AI models, but better AI experiences. The future of computing is not about one AI that does everything. It is about many specialized AI systems that know exactly what users need and execute on that flawlessly. ... The old software model was built on scarcity. Control distribution, limit access, charge premiums. AI obliterates this. The new model is fluid, frictionless, and infinitely scalable.


Cybersecurity: The “What”, the “How” and the “Who” of Change

Cybersecurity is more complex than that: Protecting the firm from cyberthreats requires the ability to reach across corporate silos, beyond IT, towards business and support functions, as well as digitalised supply chains. You can throw as much money as you like at the problem, but if you give it to a technologist CISO to resolve, they will address it as a technology matter. They will put ticks on compliance checklists. They will close down audit points. They will deal with incidents and put out fires. They will deploy countless tools (to the point where this is now becoming a major operational issue). But they will not change the culture of your organisation around business protection and breaches will continue to happen as threats evolve. A lot has been said and written about the role of the “transformational CISO”, but I doubt there are many practitioners in the current generation of CISOs who can successfully wear that mantle. Simply because most have spent the last decade firefighting cyber incidents and have never been able to project a transformative vision over the mid to long-term, let alone deliver it. They have not developed the type of political finesse, of personal gravitas, of leadership, in one word, that they would require to be trusted and succeed at delivering a truly transformative agenda across the complex and political silos of the modern enterprise.


CISOs and CIOs forge vital partnerships for business success

“One of the characteristics of a business-aligned CISO is they don’t use the veto card in every instance,” Ijam explains. “When the CISO is at the table and understands the importance of outcomes and deliverables from a business perspective as well as risk management from a security perspective, they are able to pick their battles in a smart way.” Forging a peer CIO/CISO partnership also requires the right set of leaders. While CIOs have been honing a business orientation for years, CISOs need to follow suit, maturing into a role that understands business strategy and is well-versed in the language so they command a seat at the table. “The right CISO leader is someone that doesn’t speak in ones and zeros,” Whiteside says. “They need to be at the table talking in terms that business leaders understand — not about firewalls and malware.” Becoming a C-suite peer also means cultivating an independent voice — important because CIOs and CISOs often have varying points of view, separate priorities, and different tolerances for risk. It’s equally important to make sure the CISO’s voice — and security recommendations — are part of every discussion related to business strategy, IT infrastructure, and critical systems at the beginning, not as an afterthought.


India’s Digital Personal Data Protection Act: A bold step with unfinished business

The release of the draft Digital Personal Data Protection Rules, 2025, on the 3rd of January aims to operationalise the provisions of the Act. The Act will undoubtedly go a long way in safeguarding digital personal data. Whilst the benefits to the common citizen are laudable, there are clearly areas that need to be urgently addressed. ... The draft rules mandate data localisation, restricting the transfer of certain personal data outside India. This approach has faced criticism for potentially increasing operational costs for businesses and creating barriers to global data flows. A flexible approach could be taken with regard to data flows with Friendly and Trusted Nations. Allowing cross-border data transfers to trusted jurisdictions with robust data protection frameworks will position India as a key player in global trade. India wants to increase exports of goods and services to achieve its vision of “Viksit Bharat” by 2047. ... The introduction of clear, technology-driven mechanisms for age verification without being overly intrusive needs to be determined. Implementing this rule from a pragmatic perspective will be onerous. Self-declaration may turn out to be a potential way forward, given India’s massive rural population that accesses online services and platforms and the difficulty of implementing parental consent.

Daily Tech Digest - July 30, 2023

What Is Data Strategy and Why Do You Need It?

Developing a successful Data Strategy requires careful consideration of several key steps. First, it is essential to identify the business goals and objectives that the Data Strategy will support. This will help determine what data is needed and how it should be collected, analyzed, and used. Next, it is important to assess the organization’s current data infrastructure and capabilities. This includes evaluating existing databases, data sources, tools, and processes for collecting and managing data. It also involves identifying current gaps in skills or technology that need to be addressed. Once these foundational elements are in place, organizations can begin to define their approach to Data Governance. This involves establishing policies and procedures for managing Data Quality, security, privacy, compliance, and access. It may also involve developing a framework for decision-making that ensures the right people have access to the right information at the right time. Finally, organizations should consider how they will measure success in implementing their Data Strategy. 


Battling Technical Debt

Technical debt costs you money and takes a sizable chunk of your budget. For example, a 2022 Q4 survey by Protiviti found that, on average, an organization invests more than 30% of its IT budget and more than 20% of its overall resources in managing and addressing technical debt. This money is being taken away from building new and impactful products and projects, and it means the cash might not be there for your best ideas. ... Technical debt impacts your reputation. The impact can be huge and result in unwanted media attention and customers moving to your competitors. In an article about technical debt, Denny Cherry attributes performance woes at US airline Southwest Airlines to poor investment in updating legacy equipment, which caused difficulties with flight scheduling as a result of "outdated processes and outdated IT." If you can't schedule a flight, you're going to move elsewhere. Furthermore, in many industries like aviation, downtime results in crippling fines. These could be enough to tip a company over the edge.


‘Audit considerations for digital assets can be extremely complex’

Common challenges when auditing crypto assets include understanding and evaluating controls over access to digital keys, reconciliations to the blockchain to verify existence of assets, considerations around service providers in terms of qualifications, availability and scope, and forms of reporting, among others. As the technology is rapidly evolving, the regulatory standards do not yet capture all crypto offerings. Everyone is operating in an uncertain regulatory environment, where the speed of change is significant for all participants. If you take accounting standards, for example, a common discussion today is how to measure these assets. Under IFRS, crypto assets are generally recognized as an intangible asset and recorded at cost. While this aligns with the technical requirements of the standards, it sometimes generates financial reporting that may not be well understood by users of the financial information who may be looking for the fair value of these assets.


Does AI have a future in cyber security? Yes, but only if it works with humans

One technique that has been around for a while is rolling AI technology into security operations, especially to manage repeating processes. What the AI does is filter out the noise, identify priority alerts and screen these out. The other thing it is capable of is capturing this data, looking for any anomalies and joining the dots. Established vendors are already providing capabilities like this. Here at Nominet, we have masses of data coming into our systems every day, and being able to look at correlations to identify malicious and anomalous behaviour is very valuable. But once again we find ourselves in the definition trap. Being alerted when rules are triggered is moving towards ML, not true AI. But if we could give the system the data and ask it to find us what looked truly anomalous, that would be AI. Organisations might get tens of thousands of security logs at any point in time. Firstly, how do you know if these logs show malicious activity and if so, what is the recommended course of action?
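
To illustrate the difference between rule triggers and "finding what looks truly anomalous", here is a toy TypeScript example that flags hours whose failed-login counts sit far outside the recent baseline. The data and the z-score cut-off are invented for illustration; real systems use far richer features and models than a simple statistic over counts.

```typescript
// Toy anomaly check over security telemetry: flag hours whose failed-login
// counts deviate sharply from the baseline instead of waiting for a fixed rule.
function zScores(counts: number[]): number[] {
  const mean = counts.reduce((s, x) => s + x, 0) / counts.length;
  const variance = counts.reduce((s, x) => s + (x - mean) ** 2, 0) / counts.length;
  const std = Math.sqrt(variance) || 1; // avoid divide-by-zero on flat data
  return counts.map((x) => (x - mean) / std);
}

const failedLoginsPerHour = [12, 9, 15, 11, 10, 14, 13, 240, 12, 11]; // invented sample
const scores = zScores(failedLoginsPerHour);

failedLoginsPerHour.forEach((count, hour) => {
  if (scores[hour] > 2.5) { // illustrative cut-off
    console.log(`hour ${hour}: ${count} failures looks anomalous (z=${scores[hour].toFixed(1)})`);
  }
});
```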


Moody’s highlights DLT cyber risks for digital bonds

The body of the paper warns of the cyber risks of smaller public blockchains, which are less decentralized and hence more vulnerable to attacks. It considers private DLTs are more secure than similar (small) sized public blockchains because they have greater access controls. Moody’s acknowledges that larger Layer 1 public blockchains such as Ethereum are far harder to attack, but upgrades to the network carry risks. A major challenge is the safeguarding of private keys. In reality the most significant risks relate to the platforms themselves, bugs in smart contracts and oracles which introduce external data. It notes that currently many solutions don’t have cash on ledger, which reduces the attack surface. In reality this makes them less attractive to attack. As cash on ledger becomes more widespread, this enables greater automation. Manipulating smart contract weaknesses could result in unintended payouts and other vulnerabilities. Moody’s specifically mentions the risks associated with third party issuance platforms such as HSBC Orion, DBS, and Goldman Sachs’ GS DAP.


Cyber Resilience Act: EU Regulators Must Strike the Right Balance to Avoid Open Source Chilling Effect

The good news is that developers are willing to work with regulators in fine-tuning the act. And why not get them involved? They know the industry, have deep insights into prevailing processes and fully grasp the intricacies of open source. Additionally, open source is too lucrative and important to ignore. One suggestion is to clarify the wording. For example, replace “commercial activity” with “paid or monetized product.” This will go some way to narrowing the act’s scope and ensuring that open-source projects are not unnecessarily targeted. Another is differentiating between market-ready software products and stand-alone components, ensuring that requirements and obligations are appropriately tailored. Meanwhile, regulators can provide funding in the legislation to actively support open source. For example, Germany grants resources to support developers in maintaining open-source software projects of strategic importance. A similar sovereign tech fund could prove instrumental in supporting and protecting the industry across the continent.


Organizational Resilience And Operating At The Speed Of AI

The challenge becomes—particularly for mid-market organizations that may not have the resources of their larger competitors—how to corral resources to ensure they can effectively incorporate AI. If businesses are to achieve the kind of organizational resilience that is necessary to build sustainable enterprises, they must accept that AI and automation will fundamentally change company structures, culture and operations. Much of this will require investment in “intangible goods, such as business processes and new skills,” as suggested in the Brookings Institute article, but I would like to add one additional imperative: data gravity. ... To operate at the speed of AI, systems must be able to access all the information within an organization’s disparate IT infrastructure. That data must be secure, have integrity and be without bias. AI requires data agility. Therefore, organizations should employ a data gravity strategy whereby all the data within an organization is consolidated into a central hub, creating a single view of all the information. 


As Ransomware Monetization Hits Record Low, Groups Innovate

With ransomware profits in decline, groups have been exploring fresh strategies to drive them back up. While groups such as Clop have shifted tactics away from ransomware to data theft and extortion, other groups have been targeting larger victims, seeking bigger payouts. Some affiliates have been switching ransomware-as-a-service provider allegiance, with many Dharma and Phobos business partners adopting a new service named 8Base, Coveware says. Numerous criminal groups continue to wield crypto-locking malware. The greatest number of successful attacks it saw during the second quarter involved either BlackCat or Black Basta ransomware, followed by Royal, LockBit 3.0, Akira, Silent Ransom and Cactus. One downside of crypto-locking malware is that attacks designed to take down the largest possible victims, in pursuit of the biggest potential ransom payment, typically demand substantial manual effort, including hands-on-keyboard time. Groups may also need to purchase stolen credentials for the target from an initial access broker, pay penetration testing experts or share proceeds with other affiliates.


How Indian organisations are keeping pace with cyber security

Jonas Walker, director of threat intelligence at Fortinet, said the digitisation of retail and the rise of e-commerce makes those sectors susceptible to payment card data breaches, supply chain attacks and attacks targeting customer information. “Educational institutions also hold a wealth of personal information, including student and faculty data, making them attractive targets for data breaches and identity theft,” he added. But enterprises in India are not about to let the bad actors get their way. Sakra World Hospital, for example, has segmented its networks and implemented role-based access, endpoint detection and response, as well as zero-trust capabilities for its internal network. It also conducts vulnerability assessments and penetration tests to secure its external assets. “Zero-trust should be implemented on your external security appliances as well,” he added. “The notification system should be strong and prompt so that action can be taken immediately to mitigate any cyber security risk.”


How Can Blockchain Lead to a Worldwide Economic Boom?

The inherent trustworthiness of distributed ledgers is a key factor here in that they greatly enhance critical economic drivers like supply chain management, land ownership, and the distribution of government and non-government services. At the same time, blockchain’s support of digital currencies provides greater access to capital, in large part by side-stepping the regulatory frameworks that govern sovereign currencies. And perhaps most importantly, blockchain helps to stymie public corruption and the diversion of funds away from their intended purpose, which allows capital and profits to reach those who have earned them and will put them to more productive uses. None of this should imply that blockchain will put the entire world on easy street. Significant challenges remain, not the least of which is the cost to establish the necessary infrastructure to support secure digital ledgers. Multiple hardened data centers are required to prevent hacking, along with high-speed networks to connect them.



Quote for the day:

"Leadership is a privilege to better the lives of others. It is not an opportunity to satisfy personal greed." -- Mwai Kibaki

Daily Tech Digest - December 19, 2021

Data Science Collides with Traditional Math in the Golden State

San Francisco’s approach is the model for a new math framework proposed by the California Department of Education that has been adopted for K-12 education statewide. Like the San Francisco model, the state framework seeks to alter the traditional pathway that has guided college-bound students for generations, including by encouraging middle schools to drop Algebra (the decision to implement the recommendations is made by individual school districts). This new framework has been received with some controversy. Yesterday, a group of university professors wrote an open letter on K-12 mathematics, which specifically cites the new California Mathematics Framework. “We fully agree that mathematics education ‘should not be a gatekeeper but a launchpad,’” the professors write. “However, we are deeply concerned about the unintended consequences of recent well-intentioned approaches to reform mathematics, particularly the California Mathematics Framework.” Frameworks like the CMF aim to “reduce achievement gaps by limiting the availability of advanced mathematical courses to middle schoolers and beginning high schoolers,” the professors continued.


Promoting trust in data through multistakeholder data governance

A lack of transparency and openness of the proceedings, or barriers to participation, such as prohibitive membership fees, will impede participation and reduce trust in the process. These challenges are particularly felt by participants from low- and middle-income countries (LICs and LMICs), whose financial resources and technical capacity are usually not on par with those of higher-income countries. These challenges affect both the participatory nature of the process itself and the inclusiveness and quality of the outcome. Even where a level playing field exists, the effectiveness of the process can be limited if decision makers do not incorporate input from other stakeholders. Notwithstanding the challenges, multistakeholder data governance is an essential component of the “trust framework” that strengthens the social contract for data. In practice, this will require supporting the development of diverse forums—formal or informal, digital or analog—to foster engagement on key data governance policies, rules, and standards, and the allocation of funds and technical assistance by governments and nongovernmental actors to support the effective participation of LMICs and underrepresented groups.


A Plan for Developing a Working Data Strategy Scorecard

Strategy is an evolving process, with regular adjustments expected as progress is measured against desired goals over longer timeframes. “There’s always an element of uncertainty about the future,” Levy said, “so strategy is more about a set of options or strategic choices, rather than a fixed plan.” It’s common for companies to re-evaluate and adjust accordingly as business goals evolve and systems or tools change. Before building a strategy, people often assume that they must have vision statements or mission statements, a SWOT analysis, or goals and objectives. These are good to have, he said, but in most instances they only emerge after the strategy analysis is completed. “When people establish their Data Strategies, it’s typically to address the limitations they have and the goals that they want. Your strategy, once established, should be able to answer these questions.” But again, Levy said, that comes after the strategy is developed, not before. Although it can be difficult to understand the purpose of a Data Strategy, he said, it’s critically important to clearly identify goals and know how to communicate them to the intended audience.


“Less popular” JavaScript Design Patterns.

As software engineers, we strive to write maintainable, reusable, and eloquent code that might live forever in large applications. The code we create must solve real problems. We are certainly not trying to create redundant, unnecessary, or “just for fun” code. At the same time, we frequently face problems that already have well-known solutions, defined and discussed countless times by the global community or even by our own teams. Such solutions are called “design patterns”. There are many established design patterns in software design; some are used often, others less frequently. Examples of popular JavaScript design patterns include the factory, singleton, strategy, decorator, and observer patterns. In this article, we’re not going to cover all of the design patterns in JavaScript. Instead, let’s consider some of the less well-known but potentially useful JS patterns such as command, builder, and special case, as well as real examples from our production experience.
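
As a flavour of one of the "less popular" patterns the article names, here is a minimal command-pattern sketch, written in TypeScript rather than plain JavaScript for the type annotations. The `CartService`, command, and bus names are illustrative and not taken from the article's production examples.

```typescript
// Each command encapsulates one reversible action against a receiver.
interface Command {
  execute(): void;
  undo(): void;
}

class CartService {
  private items: string[] = [];
  add(item: string) { this.items.push(item); }
  remove(item: string) { this.items = this.items.filter((i) => i !== item); }
  list() { return [...this.items]; }
}

class AddItemCommand implements Command {
  constructor(private cart: CartService, private item: string) {}
  execute() { this.cart.add(this.item); }
  undo() { this.cart.remove(this.item); }
}

// The invoker keeps a history, so any executed command can be rolled back.
class CommandBus {
  private history: Command[] = [];
  run(cmd: Command) { cmd.execute(); this.history.push(cmd); }
  undoLast() { this.history.pop()?.undo(); }
}

const cart = new CartService();
const bus = new CommandBus();
bus.run(new AddItemCommand(cart, "ssd-1tb"));
bus.undoLast();
console.log(cart.list()); // []
```

The pay-off is that callers only ever talk to `CommandBus`, so logging, queuing, or undo support can be added in one place without touching the receivers.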


Software Engineering | Coupling and Cohesion

The purpose of the design phase in the software development life cycle is to produce a solution to the problem given in the SRS (Software Requirement Specification) document. The output of the design phase is the Software Design Document (SDD). Design is essentially a two-part iterative process. The first part is conceptual design, which tells the customer what the system will do. The second is technical design, which allows the system builders to understand the actual hardware and software needed to solve the customer’s problem. ... If the dependency between modules is based on the fact that they communicate by passing only data, then the modules are said to be data coupled. In data coupling, the components are independent of each other and communicate through data, and module communications don’t contain tramp data; a customer billing system is a typical example. In stamp coupling, a complete data structure is passed from one module to another, so it can involve tramp data. It may be necessary for efficiency reasons, but that choice is made by the insightful designer, not the lazy programmer.
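
The difference between data coupling and stamp coupling is easiest to see in code. The sketch below, loosely following the customer-billing example, contrasts the two; the `Customer` shape and function names are illustrative assumptions.

```typescript
interface Customer {
  id: string;
  name: string;
  address: string;
  purchases: { amount: number }[];
}

// Data coupling: only the values the module needs are passed,
// so no "tramp data" travels along for the ride.
function computeTotal(amounts: number[]): number {
  return amounts.reduce((sum, a) => sum + a, 0);
}

// Stamp coupling: the whole Customer structure is passed even though
// only `purchases` is used; name and address become tramp data.
function computeTotalStamp(customer: Customer): number {
  return customer.purchases.reduce((sum, p) => sum + p.amount, 0);
}

const customer: Customer = {
  id: "c-42",
  name: "Ada",
  address: "221B Baker St",
  purchases: [{ amount: 30 }, { amount: 12.5 }],
};

console.log(computeTotal(customer.purchases.map((p) => p.amount))); // 42.5
console.log(computeTotalStamp(customer));                           // 42.5
```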


5 Takeaways from SmartBear’s State of Software Quality Report

As API adoption and growth continue, standardization (52%) continues to rank as the top challenge organizations hope to solve as they look to scale. Without standardization, APIs become bespoke and developer productivity declines. Costs and time to market increase to accommodate changes, the general quality of the consumer experience wanes, and the result is a lower value proposition and decreased reach. Additionally, the consumer persona in the API landscape is rightfully getting more attention. Consumer expectations have never been higher: API consumers demand standardized offerings from providers and will look elsewhere if expectations around developer experience aren’t met, which is especially true in financial services. Security (40%) has thankfully crept up the rankings to number two this year. APIs increasingly connect our most sensitive data, so ensuring your APIs are secure before, during, and after production is imperative. Applying thoughtful standardization and governance guardrails is required for teams to deliver good-quality, secure APIs consistently.
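
One small, concrete way API standardization shows up in practice is a shared response envelope that every service returns, so consumers never have to guess the error shape. The field names below are illustrative assumptions, not drawn from SmartBear’s report.

```typescript
interface ApiError {
  code: string;      // stable, machine-readable error code
  message: string;   // human-readable summary
  details?: unknown; // optional structured context
}

interface ApiResponse<T> {
  data: T | null;
  error: ApiError | null;
}

function ok<T>(data: T): ApiResponse<T> {
  return { data, error: null };
}

function fail(code: string, message: string): ApiResponse<never> {
  return { data: null, error: { code, message } };
}

// Every endpoint, regardless of team, returns the same envelope:
const found = ok({ accountId: "a-1", balance: 120.0 });
const missing = fail("ACCOUNT_NOT_FOUND", "No account with id a-2");
console.log(found, missing);
```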


From DeFi year to decade: Is mass adoption here? Experts Answer, Part 1

More scaling solutions will become essential to the mass adoption of DeFi products and services. We are seeing that most DeFi applications go live on multiple chains. While that makes them cheaper to use, it adds more complexity for those who are trying to learn and understand how they work. Thus, to start the second phase of DeFi mass adoption, we need solutions that simplify onboarding and the use of DApps that are spread across different chains and scaling solutions. The endgame is that all cross-chain actions happen in the background, handled by infrastructure services such as Biconomy or by the DApps themselves, so users don’t need to deal with them directly. ... Going into 2022 and equipped with the right layer-one networks, we’re aiming for mass adoption. To achieve that, we need to eradicate the entry barriers for buying and selling crypto through regulated fiat bridges (such as banks), overhaul the user experience, reduce fees, and provide the right guardrails so everyone can easily and safely participate in the decentralized economy. DeFi is legitimizing crypto and decentralized economies. Traditional financial institutions are already starting to participate. In 2022, we will only see an uptick in usage and adoption.


Serious Security: OpenSSL fixes “error conflation” bugs – how mixing up mistakes can lead to trouble

The good news is that the OpenSSL 1.1.1m release notes don’t list any CVE-numbered bugs, suggesting that although this update is both desirable and important, you probably don’t need to consider it critical just yet. But those of you who have already moved forwards to OpenSSL 3 – and, like your tax return, it’s ultimately inevitable, and somehow a lot easier if you start sooner – should note that OpenSSL 3.0.1 patches a security risk dubbed CVE-2021-4044. ... In theory, a precisely written application ought not to be dangerously vulnerable to this bug, which is caused by what we referred to in the headline as error conflation, which is really just a fancy way of saying, “We gave you the wrong result.” Simply put, some internal errors in OpenSSL – for example a genuine but unlikely error such as running out of memory, or a flaw elsewhere in OpenSSL that provokes an error where there wasn’t one – don’t get reported correctly. Instead of percolating back to your application precisely, these errors get “remapped” as they are passed back up the call chain in OpenSSL, where they ultimately show up as a completely different sort of error.
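
The "error conflation" anti-pattern is not specific to OpenSSL. The sketch below is not OpenSSL code; it is a TypeScript illustration of the same failure mode, where a specific internal error is remapped into an unrelated one as it travels back up the call chain, and of how preserving the original cause avoids that.

```typescript
class OutOfMemoryError extends Error {}
class SignatureInvalidError extends Error {}

function verifySignature(simulateOom: boolean): void {
  if (simulateOom) throw new OutOfMemoryError("allocation failed");
  // ... real verification would happen here ...
}

// The conflating wrapper: every internal failure, whatever its cause,
// is reported to the caller as "signature invalid".
function verifyConflated(simulateOom: boolean): void {
  try {
    verifySignature(simulateOom);
  } catch {
    throw new SignatureInvalidError("signature did not verify");
  }
}

// A better wrapper preserves the original failure as the cause, so the
// caller can distinguish "bad signature" from "internal error".
function verifyPrecise(simulateOom: boolean): void {
  try {
    verifySignature(simulateOom);
  } catch (err) {
    throw new Error("verification aborted", { cause: err });
  }
}

try {
  verifyConflated(true); // an out-of-memory failure...
} catch (e) {
  console.log(e instanceof SignatureInvalidError); // ...surfaces as "bad signature"
}
```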


Digital Asset Management – what is it, and why does my organisation need it?

DAM technology is more than a repository, of course. Picture it as a framework that holds a company’s assets, on top of which sits a powerful AI engine capable of learning the connections between disparate data sets and presenting them to users in ways that make the data more useful and functional. Advanced DAM platforms can scale up to storing more than ten billion objects at the same time, all of which become tangible assets connected by the in-built AI. This has the capacity to produce a huge rise in efficiency around the use of assets and objects. Take, for example, a busy modern media marketing agency. In the digital world, such agencies face a massive expansion of content at the same time as release windows are shrinking, coupled with increasingly complex content creation and delivery ecosystems. A DAM platform can manage those huge volumes of assets, each with their complex metadata, at speeds and scale that would simply break a legacy system. Another compelling example of DAM in action is a large U.S.-based film and TV company, which uses it for licensing management.


Impact of Data Quality on Big Data Management

A starting point for measuring Data Quality can be the qualities of big data (volume, velocity, variety, veracity), supplemented with a fifth criterion, value, which together make up the baseline performance benchmarks. Interestingly, these baseline benchmarks actually contribute to the complexity of big data: variety (structured, unstructured, or semi-structured data) increases the possibility of poor data, and data channels such as streaming devices with high-volume, high-velocity data increase the chances of corrupt data, so no single quality metric can work on such voluminous, multi-type data. The easy availability of data today is both a boon and a barrier to Enterprise Data Management. On one hand, big data promises advanced analytics with actionable outcomes; on the other, data integrity and security are seriously threatened. The Data Quality program is an important step in implementing a practical Data Governance framework, as this single factor controls the outcomes of business analytics and decision-making. ... Another primary challenge that big data brings to Data Quality Management is ensuring data accuracy, without which insights would be inaccurate.
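
As a minimal sketch of what baseline quality benchmarks can look like in code, the example below scores a batch of streaming records on completeness and validity. The record shape, field names, and scoring rules are illustrative assumptions, not formulas from the article.

```typescript
interface Reading {
  deviceId: string | null;
  timestamp: string | null; // ISO 8601 expected
  value: number | null;
}

// Completeness: share of fields that are actually populated.
function completeness(rows: Reading[]): number {
  const fields = rows.flatMap((r) => [r.deviceId, r.timestamp, r.value]);
  const filled = fields.filter((f) => f !== null && f !== undefined).length;
  return fields.length ? filled / fields.length : 1;
}

// Validity: share of rows whose timestamp parses and whose value is a
// finite, non-negative number.
function validity(rows: Reading[]): number {
  const valid = rows.filter(
    (r) =>
      r.timestamp !== null &&
      !Number.isNaN(Date.parse(r.timestamp)) &&
      r.value !== null &&
      Number.isFinite(r.value) &&
      r.value >= 0
  ).length;
  return rows.length ? valid / rows.length : 1;
}

const batch: Reading[] = [
  { deviceId: "d1", timestamp: "2021-12-19T10:00:00Z", value: 21.4 },
  { deviceId: null, timestamp: "not-a-date", value: -5 },
];
console.log(completeness(batch).toFixed(2)); // "0.83"
console.log(validity(batch).toFixed(2));     // "0.50"
```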



Quote for the day:

"There is no "one" way to be a perfect leader, but there are a million ways to be a good one." -- Mark W. Boyer