
Daily Tech Digest - January 08, 2026


Quote for the day:

“When opportunity comes, it’s too late to prepare.” -- John Wooden



All in the Data: The State of Data Governance in 2026

For years, Non-Invasive Data Governance was treated as the “nice” approach — the softer way to apply discipline without disruption. But 2026 has rewritten that narrative. Now, NIDG is increasingly seen as the only sustainable way to govern data in a world of continuous transformation. Traditional “assign people to be stewards” approaches simply cannot keep up with agentic AI, edge analytics, real-time data products, and the modern demand for organizational agility. ... Governance becomes the spark that ignites faster value, safer AI, more confident decision-making, and a culture that welcomes transformation instead of bracing for it. This catalytic effect is why organizations that embrace “The Data Catalyst³” in 2026 are not merely improving — they are accelerating, compounding their gains, and outpacing peers who still treat governance as a slow, procedural necessity rather than the engine of modern data excellence. ... This year, metadata is no longer an afterthought. It is the bloodstream of governance. Organizations are finally acknowledging that without shared understanding, consistent definitions, and a reliable inventory of where data comes from and who touches it, AI will hallucinate confidently while leaders make decisions blindly. ... Perhaps the greatest evolution in 2026 is the rise of governance that keeps pace with AI. Organizations can no longer review policies once a year or update data inventories only during budget cycles. Decision cycles are compressing. Change windows are shrinking. 


The Next Two Years of Software Engineering

AI unlocks massive demand for developers across every industry, not just tech. Healthcare, agriculture, manufacturing, and finance all start embedding software and automation. Rather than replacing developers, AI becomes a force multiplier that spreads development work into domains that never employed coders. We’d see more entry-level roles, just different ones: “AI-native” developers who quickly build automations and integrations for specific niches. ... Position yourself as the guardian of quality and complexity. Sharpen your core expertise: architecture, security, scaling, domain knowledge. Practice modeling systems with AI components and think through failure modes. Stay current on vulnerabilities in AI-generated code. Embrace your role as mentor and reviewer: define where AI use is acceptable and where manual review is mandatory. Lean into creative and strategic work; let the junior+AI combo handle routine API hookups while you decide which APIs to build. ... Lean into leadership and architectural responsibilities. Shape the standards and frameworks that AI and junior team members follow. Define code quality checklists and ethical AI usage policies. Stay current on compliance and security topics for AI-produced software. Focus on system design and integration expertise; volunteer to map data flows across services and identify failure points. Get comfortable with orchestration platforms. Double down on your role as technical mentor: more code reviews, design discussions, technical guidelines.


What will IT transformation look like in 2026, and how do you know if you're on the right track?

The IT organization will become the keeper of the journal in terms of business value, and a lot of organizations haven't developed those muscles yet. ... Technical complexity remains a huge challenge. Back-end systems are becoming more complicated, requiring stronger architecture frameworks, faster design cycles and reliable data access to support emerging agentic AI frameworks. ... "Many IT organizations have taken the easy way," said de la Fe, referring to cloud and application service providers. As a result, their data is spread across different environments. Organizations may technically own their data, he said, but "it isn't with them -- or architected in a manner where they can access and use it as they may need to." ... "They believe it's a period of architectural redux because applications are becoming more heterogeneous," Vohra said. "Their architecture must be more modular and open, but they can't simply say no to core applications, because the business will demand them. They must be more responsive to the business than ever before." ... Without business-IT alignment, IT cannot deliver the business impact the organization now expects. CIOs are under increasing pressure from senior leadership and boards to improve efficiency and deliver business value, as measured in business KPIs rather than traditional IT KPIs. On the technology side, CIOs also need to ensure they are architecting for the future. 


Why CISOs Must Adopt the Chief Risk Officer Playbook

As the threat landscape becomes increasingly complex due to AI acceleration, shifting regulations, and geopolitical volatility, the role of the security leader is evolving. For CISOs and their teams, the McKinsey research provides a blueprint for transforming from technical gatekeepers into strategic risk leaders. ... A common question in the industry is whether a company needs both a Chief Risk Officer and a Chief Information Security Officer (CISO). ... Understanding the difference in what these two leaders look for is key to collaboration:
Primary goal: the CRO protects the organization's financial health and long-term viability; the CISO protects the confidentiality, integrity, and availability of digital assets.
Key metric: the CRO tracks risk-adjusted return on capital and insurance premium outcomes; the CISO tracks mean time to detect (MTTD), threat actor activity, and control effectiveness.
Focus area: the CRO watches market shifts, credit risk, geopolitical crises, and supply chain fragility; the CISO watches vulnerabilities, phishing campaigns, ransomware, and insider threats.
Outcome: the CRO ensures the business can survive any "bad day," financial or otherwise; the CISO ensures the digital infrastructure is resilient against constant attack.
... The next generation of cybersecurity leaders will not just be the ones who can write the best code or configure the tightest firewall. They will be the ones who can walk into a boardroom, speak the language of the CRO, and explain how a specific technical risk impacts the organization's bottom line.


Passwords are where PCI DSS compliance often breaks down

CISOs often ask where password managers fit within the PCI DSS language. The standard does not mandate specific technologies, but it defines outcomes that password managers help achieve. Requirement 8 focuses on identifying users and authenticating access. Unique credentials and protection of authentication factors are core expectations. Requirement 12.6 addresses security awareness. Training must reflect real risks and employee responsibilities. Demonstrating that employees are trained to use approved credential management tools strengthens assessment evidence. Self-assessment questionnaires reinforce this operational focus. They ask how credentials are handled, how access is reviewed, and how training is documented, pushing organizations to demonstrate process rather than policy. ... “Security leaders want to know who accessed what and when. That visibility turns password management from a convenience feature into a control.” ... Culture shows up in small choices. Whether employees ask before sharing access. Whether they trust approved tools. Whether security feels like support or friction. PCI DSS 4.x pushes organizations to take those signals seriously. Passwords sit at the center of that shift because they touch every system and every user. Training alone does not change behavior. Tools alone do not create understanding. 


AI Demand and Policy Shifts Redraw Europe’s Data Center Map for 2026

Rising demand for AI, particularly large language models (LLMs) and generative AI, is driving the need for large-scale GPU clusters and advanced infrastructure. The EU's forthcoming Cloud and AI Development Act aims to triple the region's data center processing capacity within five to seven years, with streamlined approvals and public funding for energy-efficient facilities expected to stimulate growth. ... “We expect to see a strategic bifurcation,” Lamb said, with FLAP-D metros continuing to attract latency-sensitive enterprise and inference workloads that require proximity to end users, while large-scale AI training deployments gravitate toward regions with abundant, cost-effective renewable energy. ... Despite abundant renewables and favorable cool conditions, the Nordics have not scaled as quickly as anticipated. Thorpe reported steady but slower growth, citing municipal moratoriums – particularly in Sweden – and lower fiber density. Even so, AI training workloads are renewing interest in Norway and Finland. “The northern part of Norway is a good example,” Thorpe said, noting OpenAI’s planned Stargate facility powered entirely by hydroelectric energy. “They are able to achieve much lower PUE [power usage effectiveness] because of the cooler climate.” ... Meanwhile, stricter energy-efficiency requirements are complicating the planning process.


Top cyber threats to your AI systems and infrastructure

Multiple attack types against AI systems are emerging. Some attacks, such as data poisoning, occur during training. Others, such as adversarial inputs, happen during inference. Still others, such as model theft, occur during deployment. ... Here, the attack goes after the model itself, seeking to produce inaccurate results by tampering with the model’s architecture or parameters. Some definitions of model poisoning also include attacks where the model’s training data has been corrupted through data poisoning. ... “With prompt injection, you can change what the AI agent is supposed to do,” says Fabien Cros ... Model owners and operators use perturbed data to test models for resiliency, but hackers use it to disrupt. In an adversarial input attack, malicious actors feed deceptive data to a model with the goal of making the model output incorrect. ... Like other software systems, AI systems are built with a combination of components that can include open-source code, open-source models, third-party models, and various sources of data. Any security vulnerability in the components can show up in the AI systems. This makes AI systems vulnerable to supply chain attacks, where hackers can exploit vulnerabilities within the components to launch an attack. ... Also called model jailbreaking, this attack aims to get AI systems — primarily through engaging with LLMs — to disregard the guardrails that confine their actions and behavior, such as safeguards to prevent harmful, offensive, or unethical outputs.
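A first line of defense against prompt injection can be sketched in a few lines of code. The following Python fragment is purely illustrative: the pattern list and function name are assumptions, and real deployments typically pair such keyword heuristics with model-based classifiers rather than relying on a fixed list.

```python
import re

# Illustrative phrasings often seen in prompt-injection attempts.
# A fixed list like this is easy to evade; treat it as a first filter only.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard .* guardrails",
    r"reveal .* system prompt",
]

def screen_untrusted_input(text: str) -> bool:
    """Return True if the text looks like a prompt-injection attempt."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

Flagged inputs can then be quarantined or routed for human review instead of being passed straight to the model.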


The future of authentication in 2026: Insights from Yubico’s experts

As we look ahead to the future of authentication and identity, 2026 will be a pivotal year as the industry intensifies its focus on the standardization work required to make post-quantum cryptography (PQC) viable at scale as we near a post-quantum future. ... The proven, most effective solution to combat stolen and fake identities is the use of verifiable credentials – specifically, strong authentication combined with digital identity verification. The good news is countries around the world are taking action, with the EU moving forward with a bold plan over the next year: By late December 2026, each Member State must make at least one EUDI wallet available. ... AI's usefulness has rapidly improved over the years, and I anticipate that it will eventually help the general public in a meaningful way. In 2026, the cybersecurity industry should focus more efforts globally on accelerating the adoption of digital content transparency and authenticity standards to help everyone discern fact from fiction and continue the phishing-resistant MFA journey to minimize some of the impact of scams. ... In 2026, there will be a pivotal shift in the digital identity landscape as the industry moves beyond a narrow, consumer-centric focus to one focused on the enterprise. While the public conversation around digital identities has historically centered on consumer-facing scenarios like age verification, the coming year will bring a realisation that robust digital identity truly belongs in the heart of businesses.


7 changes to the CIO role in 2026

As AI transforms how people do their jobs, CIOs will be expected to step up and help lead the effort.
“A lot of the conversations are about implementing AI solutions, how to make solutions work, and how they add value,” says Ryan Downing. “But the reality is with the transformation AI is bringing into the workplace right now, there’s a fundamental change in how everyone will be working.” ... This year, the build or buy decisions for AI will have dramatically bigger impacts than they did before. In many cases, vendors can build AI systems better, quicker, and cheaper than a company can do it themselves. And if a better option comes along, switching is a lot easier than when you’ve built something internally from scratch. ... The key is to pick platforms that have the ability to scale, but are decoupled, he says, so enterprises can pivot quickly, but still get business value. “Right now, I’m prioritizing flexibility,” he says. Bret Greenstein, chief AI officer at management consulting firm West Monroe Partners, recommends CIOs identify aspects of AI that are stable, and those that change rapidly, and make their platform selections accordingly. ... “In the past, IT was one level away from the customer,” he says. “They enabled the technology to help business functions sell products and services. Now with AI, CIOs and IT build the products, because everything is enabled by technology. They go from the notion of being services-oriented to product-oriented.”


Agentic AI scaling requires new memory architecture

To avoid recomputing an entire conversation history for every new word generated, models store previous states in the KV cache. In agentic workflows, this cache acts as persistent memory across tools and sessions, growing linearly with sequence length. This creates a distinct data class. Unlike financial records or customer logs, KV cache is derived data; it is essential for immediate performance but does not require the heavy durability guarantees of enterprise file systems. General-purpose storage stacks, running on standard CPUs, expend energy on metadata management and replication that agentic workloads do not require. The current hierarchy, spanning from GPU HBM (G1) to shared storage (G4), is becoming inefficient ... The industry response involves inserting a purpose-built layer into this hierarchy. The ICMS platform establishes a “G3.5” tier—an Ethernet-attached flash layer designed explicitly for gigascale inference. This approach integrates storage directly into the compute pod. By utilising the NVIDIA BlueField-4 data processor, the platform offloads the management of this context data from the host CPU. The system provides petabytes of shared capacity per pod, boosting the scaling of agentic AI by allowing agents to retain massive amounts of history without occupying expensive HBM. The operational benefit is quantifiable in throughput and energy.
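The linear growth described above is easy to quantify with a back-of-envelope sketch. The model shape below (32 layers, 8 KV heads, head dimension 128, 2-byte FP16 values) is a hypothetical example, not any specific model:

```python
def kv_cache_bytes(seq_len: int, n_layers: int = 32, n_kv_heads: int = 8,
                   head_dim: int = 128, bytes_per_value: int = 2,
                   batch: int = 1) -> int:
    """KV cache size: two tensors (K and V) per layer, per cached token."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_value * batch

# The cache grows linearly with context length:
print(kv_cache_bytes(4_096) / 2**20)    # MiB at a 4K-token context -> 512.0
print(kv_cache_bytes(131_072) / 2**30)  # GiB at a 128K-token context -> 16.0
```

At agentic scale, with long histories retained across many concurrent sessions, this derived data quickly outgrows GPU HBM, which is exactly the gap a flash-backed context tier is meant to absorb.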

Daily Tech Digest - November 28, 2025


Quote for the day:

"Whenever you find yourself on the side of the majority, it is time to pause and reflect." -- Mark Twain



Security researchers caution app developers about risks in using Google Antigravity

“In Antigravity,” Mindgard argues, “’trust’ is effectively the entry point to the product rather than a conferral of privileges.” The problem, it pointed out, is that a compromised workspace becomes a long-term backdoor into every new session. “Even after a complete uninstall and re-install of Antigravity,” says Mindgard, “the backdoor remains in effect. Because Antigravity’s core intended design requires trusted workspace access, the vulnerability translates into cross-workspace risk, meaning one tainted workspace can impact all subsequent usage of Antigravity regardless of trust settings.” For anyone responsible for AI cybersecurity, says Mindgard, this highlights the need to treat AI development environments as sensitive infrastructure, and to closely control what content, files, and configurations are allowed into them. ... Swanda recommends that app development teams building AI agents with tool-calling: assume all external content is adversarial; use strong input and output guardrails, including around tool calling; strip any special syntax before processing; implement tool execution safeguards; require explicit user approval for high-risk operations, especially those triggered after handling untrusted content or other dangerous tool combinations; and not rely on prompts for security. System prompts, for example, can be extracted and used by an attacker to influence their attack strategy. 
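Two of those recommendations — stripping special syntax and requiring explicit approval for high-risk operations — can be sketched in code. The tool names and helper functions below are invented for illustration; any real agent framework defines its own dispatch layer:

```python
# Illustrative high-risk tool names; a real framework would define its own.
HIGH_RISK_TOOLS = {"shell_exec", "file_write", "http_post"}

def strip_special_syntax(text: str) -> str:
    """Remove delimiters a downstream parser might interpret as directives."""
    for token in ("{{", "}}", "<|", "|>"):
        text = text.replace(token, "")
    return text

def dispatch_tool(name: str, args: dict, user_approved: bool = False) -> str:
    """Gate high-risk tool calls behind explicit user approval."""
    if name in HIGH_RISK_TOOLS and not user_approved:
        raise PermissionError(f"tool '{name}' requires explicit user approval")
    clean_args = {k: strip_special_syntax(v) if isinstance(v, str) else v
                  for k, v in args.items()}
    # ... hand clean_args to the real tool implementation here ...
    return f"dispatched {name} with {clean_args}"
```

The key design choice is that the gate lives in the dispatch layer, not in the prompt, so an injected instruction cannot talk its way past it.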


How AI Is Rewriting The Rules Of Work, Leadership, And Human Potential

When a CEO tells his team, "AI is coming for your jobs, even mine," you pay attention. It is rare to hear that level of blunt honesty from any leader, let alone the head of one of the world's largest freelance platforms. Yet this is exactly how Fiverr co-founder and CEO Micha Kaufman has chosen to guide his company through the most significant technological shift of our lifetimes. His blunt assessment: AI is coming for everyone's jobs, and the only response is to get faster, more curious, and fundamentally better at being human. ... We're applying AI to existing workflows and platforms, seeing improvements, but not yet experiencing the fundamental restructuring that's coming. "It is mostly replacing the things we used to do as human beings, acting as robots," Kaufman observes. The repetitive tasks, the research gathering, the document summarizing, these elements where humans brought judgment but little humanity are being automated first. ... It's not enough to use the obvious AI tools in obvious ways. The real value emerges from those who push boundaries, combine systems creatively, or bring exceptional judgment to AI-assisted workflows. Kaufman points to viral videos created with advanced AI tools, noting that their quality stems not from the AI itself but from the operator's genius, experience, creativity, and taste developed over years.


How ‘digital twins’ could help prevent cyber-attacks on the food industry

A digital twin is a virtual replica of any product, process, or service, capturing its state, characteristics, and connections with other systems throughout its life cycle. The digital twin will include the computer system used by the company. It can help because conventional defences are increasingly out of step with cyber-attacks. Monitoring tools tend to detect anomalies after damage occurs. Complex computer systems can often obscure the origins of breaches. A digital twin creates a bridge between the physical and digital worlds. It allows organisations to simulate real-time events, predict what might happen next, and safely test potential responses. It can also help analyse what happened after a cyber-attack to help companies prepare for future incidents. ... A digital twin might be able to avert disaster under this scenario. By combining operational data such as temperature, humidity, or the speed of air flow with internal computing system data or intrusion attempts, digital twins offer a unified view of both system performance and cybersecurity. They enable organisations to simulate cyber-attacks or equipment failures in a safe, controlled digital environment, revealing vulnerabilities before attackers can exploit them. A digital twin can also detect abnormal temperature patterns, monitor the system for malicious activity, and perform analysis after a cyber-attack to identify the causes.


Why password management defines PCI DSS success

When you dig into real incidents involving payment data, a surprising number come down to poor password hygiene. PCI DSS v4.0 raised the bar for authentication, and the responsibility sits with security leaders to turn those requirements into workable daily habits for users and admins. ... Requirement 8 asks organizations to verify the identity of every user with strong authentication, make sure passwords and passphrases meet defined strength rules, prevent credential reuse, limit attempts, and store credentials securely. Passwords need to be at least 12 characters long, or at least 8 characters when a system cannot support longer strings. These rules line up with guidance from NIST SP 800-63B, which recommends longer passphrases, resistance against common word lists, and hashing methods that protect stored secrets. ... PCI DSS requires that access be traceable to an individual and that shared accounts be minimized and controlled. When passwords live across multiple channels, it becomes nearly impossible to show auditors reliable evidence of access history. Even if the team is trying hard, the workflow itself creates gaps that no policy document can fix. ... Some CISOs view password managers as convenience tools. PCI DSS v4.0 shows that they are closer to compliance tools because they make it possible to enforce identity controls across an organization.
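As a rough illustration of how Requirement 8's rules translate into code, here is a hedged Python sketch. PBKDF2 stands in for whatever approved key-derivation function an organization actually uses, and the function names and iteration count are illustrative assumptions, not prescribed by the standard:

```python
import hashlib
import hmac
import os

MIN_LENGTH = 12  # PCI DSS v4.0 minimum (8 only where a system cannot support more)

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Store only a salted, slow hash; PBKDF2 here stands in for any approved KDF."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def password_acceptable(password: str, previous: list) -> bool:
    """Enforce the minimum length and reject reuse of prior credentials."""
    if len(password) < MIN_LENGTH:
        return False
    for salt, old_digest in previous:
        _, candidate = hash_password(password, salt)
        if hmac.compare_digest(candidate, old_digest):
            return False  # credential reuse detected
    return True
```

A password manager enforces equivalent checks centrally, which is precisely what turns it from a convenience feature into auditable evidence of a control.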



AI fluency in the enterprise: Still a ‘horseless carriage’

Companies are tossing AI agents onto existing processes, but a transformative change — where AI is the boss — is still far away. That was the view of IT leaders at this year’s Microsoft Ignite conference who’ve been putting AI agents to work, mostly with legacy processes. The IT leaders discussed their efforts during a conference panel at the event earlier this month. “We’re probably living in some version of the horseless carriage — we haven’t got to the car yet,” said John Whittaker, director of AI platform and products at accounting and consulting firm EY. ... Pfizer is very process-centric, he said, stressing that the goal is not to reinvent processes right out of the gate. The company is analyzing how AI works for them, gaining confidence in the technology before reorganizing processes within the AI lens. “Where we’re definitely heading … is thinking about, ‘I’ve solved this process, I’ve been following exactly the way it exists today. Now let’s blow it up and reimagine it…’ — and that’s exciting,” he said. ... Lumen is now looking at where it wants the business to be in 36 months and linking it to AI agents and AI-native plans. “We’re … working back from that and ensuring that we have the right set of tools, the right set of training, and the right set of agents in order to enable that,” he said. Every new Lumen employee in Alexander’s connected ecosystem group gets a Copilot license. The technology has helped speed up the process of understanding acronyms and historical trends within the company.


Creating Impactful Software Teams That Continuously Improve

When you are a person who prefers your job to be strictly defined, with clear boundaries, then you feel supported instead of stifled by a boss who checks in on you regularly. In the same culture, you will feel relaxed, happy, and content, which will in turn allow you to bring your best to your job and deliver to your strengths, Žabkar Nordberg said. You do not want to have employees who will be extensions of yourself, Žabkar Nordberg said. Instead, you want people who will bring their own thoughts, their own solutions, and in many ways be different and better than yourself. ... Provide guidance, step away, and let people have autonomy within those constraints. You might say something like "I would like you to focus on improving our customer retention. Be aware that legal regulations require all steps in our current onboarding journey to be present, but we have flexibility in how we execute them as the user experience is not prescribed". This gives people guidance and focuses them, but still gives them the autonomy to bring their own experiences and find their own solutions. ... We want people to show initiative and proactively bring their own thoughts, improvements, and worries. Clear communication and an understanding of how people work will help them do that, Žabkar Nordberg said. Psychological safety underlines trust, autonomy, and communication; it is required for them to work effectively, he concluded.


Empathetic policy engineering: The secret to better security behavior and awareness

Insecure behavior is often blamed on users, when the problem frequently lies in the measure itself. In IT security research, the focus is often on individual user behavior — for example, on whether secure behavior depends on personality traits. The question of how well security measures actually fit the reality of work — that is, how likely they are to be accepted in everyday practice — is neglected. For every threat, there are usually several available security measures. But differences in effort, acceptance, compatibility, or complexity are often not taken into account in practice. Instead, security or IT departments often make decisions based solely on technical aspects. ... Security measures and guidelines are often communicated in a way that doesn’t resonate with users’ work reality because they don’t aim to engage employees and motivate them: for example, through instructions, standard online training, or overly playful formats like comics that employees don’t take seriously. ... The limited success of many security measures is not solely due to the users — often it’s unrealistic requirements, a lack of involvement, and inadequate communication. For security leaders, this means: instead of relying on education and sanctions, a strategic paradigm shift is needed. They should become a kind of empathetic policy architect whose security strategy not only works technically but also resonates on a human level.


Agentic AI is not ‘more AI’—it’s a new way of running the enterprise

Agentic AI marks a shift from simply predicting outcomes or offering recommendations to systems that can plan tasks, take actions and learn from the results within defined guardrails. In practical terms, this means moving beyond isolated, single-task copilots towards coordinated “swarms” of agents that continually monitor signals, trigger workflows across systems, negotiate constraints and complete loops with measurable outcomes. ... A major barrier is trust and control. Leaders remain cautious about allowing software to take autonomous actions. Graduated autonomy provides a path forward: beginning with assistive tools, moving to supervised autonomy with reversible actions and eventually deploying narrow, fully autonomous loops when KPIs and rollback mechanisms have been validated. Lack of clarity on value is another obstacle. Impressive demonstrations do not constitute a strategy. Organisations should use a jobs-to-be-done perspective and tie each agent to a specific financial or risk objective, such as days-sales-outstanding, mean time to resolution, inventory turns or claims leakage. Analysts have warned that many agentic initiatives will be cancelled if value remains vague, so clear scorecards and time-boxed proofs of value are essential. Data readiness is a further challenge. Weak lineage, uncertain ownership and inconsistent quality stop AI scaling efforts in their tracks.


6 strategies for CIOs to effectively manage shadow AI

“Be clear which tools and platforms are approved and which ones aren’t,” he says. “Also be clear which scenarios and use cases are approved versus not, and how employees are allowed to work with company data and information when using AI like, for example, one-time upload as opposed to cut-and-paste or deeper integration.” ... “The most important thing is creating a culture where employees feel comfortable sharing what they use rather than hiding it,” says Fisher. His team combines quarterly surveys with a self-service registry where employees log the AI tools they use. IT then validates those entries through network scans and API monitoring. ... “Effective inventory management requires moving beyond periodic audits to continuous, automated visibility across the entire data ecosystem,” he says, adding that good governance policies ensure all AI agents, whether approved or built into other tools, send their data in and out through one central platform. ... “Risk tolerance should be grounded in business value and regulatory obligation,” says Morris. Like Fisher, Morris recommends classifying AI use into clear categories, what’s permitted, what needs approval, and what’s prohibited, and communicating that framework through leadership briefings, onboarding, and internal portals. ... Transparency is the key to managing shadow AI well. Employees need to know what’s being monitored and why.


It’s Time to Rethink Access Control for Modern Development Environments

When faced with the time-consuming complexity of managing granular permissions across dozens of development tools, most VPs of Engineering and CTOs opt for the path of least resistance, granting broad administrative privileges to entire engineering teams. It’s understandable from a productivity standpoint; nobody wants to be a bottleneck when a critical release is imminent, or explain to the CEO why they missed a market window because a developer couldn’t access a repository. However, when everyone has admin privileges, attackers who gain access to just one set of credentials can do tremendous damage. They gain not just access to sensitive code and data, but the ability to manipulate build processes, insert malicious code, or establish persistent backdoors. This problem becomes even more dangerous when combined with the prevalence of shadow IT, non-human identities, and contractor relationships operating outside your security perimeter. ... The answer to stronger security that doesn’t hinder developer productivity lies in implementing just-in-time permissioning within the SDLC, a concept successfully adopted from cloud infrastructure management that can transform how we handle development access controls. The approach is straightforward: instead of granting permanent administrative access to everyone, take 90 days to observe what developers actually need to do their jobs, then right-size their permissions accordingly. 
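The just-in-time model boils down to one idea: grants carry an expiry instead of living forever. The following Python fragment is an illustrative toy, not the API of any particular product:

```python
import time
from dataclasses import dataclass

@dataclass
class Grant:
    user: str
    permission: str
    expires_at: float  # Unix timestamp after which the grant is void

class JITPermissions:
    """Time-boxed elevation instead of standing admin rights."""

    def __init__(self) -> None:
        self._grants = []

    def grant(self, user: str, permission: str, ttl_seconds: float) -> None:
        """Elevate a user for a bounded window, e.g. the length of a release."""
        self._grants.append(Grant(user, permission, time.time() + ttl_seconds))

    def is_allowed(self, user: str, permission: str) -> bool:
        now = time.time()
        # Expired grants simply stop matching; a real system would also audit-log.
        return any(g.user == user and g.permission == permission
                   and g.expires_at > now for g in self._grants)
```

Because elevation expires on its own, a stolen credential is only as dangerous as the grants active at the moment of theft, rather than a permanent skeleton key.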

Daily Tech Digest - September 02, 2024

AI Demands More Than Just Technical Skills From Developers

Unlike in the past, when developers took instructions from a team lead and executed tasks as individual contributors, now they’re outsourcing problem-solving and code generation to AI tools and models. By partnering with GenAI to solve complex problems, developers who were once individual contributors are now becoming team leads in their own right. This new workflow requires developers to elevate their critical-thinking skills and empathy for end-users. No longer can they afford to operate with a superficial understanding of the task at hand. Now, it’s paramount that developers understand the why that is driving their initiative so that they can lead their AI counterparts to the most desirable outcomes. ... Developers are now co-creating IP. Who owns the IP? Does the prompt engineer? Does the GenAI tool? If developers write code with a certain tool, do they own that code? In an industry where tool sets are moving so quickly, it varies based on which tool you’re using and which version of that tool; different tools, even from the same vendor, can carry different rules. Intellectual property rights are evolving.


Embracing Neurodiversity in IT Workplace to Bridge Talent Gaps

To accommodate neurodiversity effectively, organizations must adopt a multifaceted approach. This includes providing tailored support and resources to neurodiverse employees, such as flexible work arrangements, assistive technologies, and specialized training programs. Additionally, fostering open communication and creating a supportive network of colleagues and mentors can help neurodiverse individuals feel valued and empowered to contribute their unique insights and perspectives. ... The first step, according to Leantime CEO and co-founder Gloria Folaron, is to create a cultural expectation of self-awareness — from leadership to human resources. "The self-awareness can extend across any biases you might have, relationships, or negative experiences or reactions that exist inside. It's a self-checking mechanism," she said. The second benefit of this is that, for many neurodivergent individuals, they have not been well-supported in the past — they've been forced to create their own systems to fit into more traditional work environments. By promoting even employee-level self-awareness, they become empowered to start thinking about their own needs.


Ransomware recovery: 8 steps to successfully restore from backup

Use either physical write-once-read-many (WORM) technology or virtual equivalents that allow data to be written but not changed. This does increase the cost of backups since it requires substantially more storage. Some backup technologies only save changed and updated files or use other deduplication technology to keep from having multiple copies of the same thing in the archive. ... In addition to keeping the backup files themselves safe from attackers, companies should also ensure that their data catalogs are safe. “Most of the sophisticated ransomware attacks target the backup catalog and not the actual backup media, the backup tapes or disks, as most people think,” says Amr Ahmed, EY America’s infrastructure and service resiliency leader. This catalog contains all the metadata for the backups, the index, the bar codes of the tapes, the full paths to data content on disks, and so on. “Your backup media will be unusable without the catalog,” Ahmed says. Restoring without one would be extremely hard or impractical. Enterprises need to ensure that they have in place a backup solution that includes protections for the backup catalog, such as an air gap.
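One simple way to act on Ahmed's point about protecting the backup catalog is to keep an integrity fingerprint of the catalog on offline (air-gapped) media and verify it before any restore. The sketch below is a minimal illustration of that idea, not a feature of any particular backup product; the function names and catalog layout are hypothetical.

```python
import hashlib
import json

def catalog_fingerprint(catalog: dict) -> str:
    """Deterministic SHA-256 over the catalog's canonical JSON form."""
    canonical = json.dumps(catalog, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def verify_catalog(catalog: dict, trusted_fingerprint: str) -> bool:
    """Compare the live catalog against a fingerprint kept offline."""
    return catalog_fingerprint(catalog) == trusted_fingerprint

# Record the fingerprint at backup time; check it before restoring.
catalog = {"tape-0042": {"path": "/backups/db.dump", "index": 17}}
stored = catalog_fingerprint(catalog)       # written to air-gapped media

catalog["tape-0042"]["path"] = "/tmp/evil"  # simulated tampering
assert not verify_catalog(catalog, stored)  # restore should be refused
```

A real deployment would cover the full metadata set (indexes, bar codes, disk paths) and store the fingerprint where ransomware cannot reach it, but the verify-before-restore discipline is the same.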


Complying with PCI DSS requirements by 2025

Perhaps one of the most significant changes in terms of preventing e-commerce fraud is the requirement to deploy change- and tamper-detection mechanisms that alert on unauthorized modifications to the HTTP headers and the contents of payment pages as received by the consumer browser (11.6.1). Most e-commerce-related cardholder data (CHD) theft comes from the abuse of JavaScript used within online stores (otherwise known as web-based skimming). Recent research has shown that most website payment pages have around 100 different scripts, some of which come from the merchant itself and some from third parties, and any one of these scripts can potentially be altered to harvest cardholder data. Equally, this could be the payment page of a payment service provider (PSP) to which a merchant redirects, or a PSP-generated inline frame (iframe) the merchant uses, making this an issue that is also relevant to PSPs. The ideal scenario is to reduce this risk by knowing what is in use, what is authorized, and what has not been altered, which is the principal aim of requirement 6.4.3. This mandates an inventory of scripts, their authorization, and evidence that they are necessary and have been validated.
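The script-inventory idea behind requirement 6.4.3 can be sketched in a few lines: keep an allowlist mapping each authorized script URL to a hash of its expected contents, then flag anything observed on the payment page that is either not in the inventory or has changed. This is an illustrative toy, not a compliance tool; the function names and URLs are invented for the example.

```python
import hashlib

def sha256_hex(body: bytes) -> str:
    return hashlib.sha256(body).hexdigest()

def audit_scripts(observed: dict, allowlist: dict) -> list:
    """Return (url, reason) pairs for unauthorized or altered scripts.

    observed:  {url: script_body_bytes} as received by the consumer browser
    allowlist: {url: expected_sha256}   the merchant's authorized inventory
    """
    findings = []
    for url, body in observed.items():
        if url not in allowlist:
            findings.append((url, "unauthorized"))
        elif sha256_hex(body) != allowlist[url]:
            findings.append((url, "altered"))
    return findings

allowlist = {"https://shop.example/pay.js": sha256_hex(b"render_form();")}
observed = {
    "https://shop.example/pay.js": b"render_form(); exfiltrate(card);",
    "https://cdn.evil.example/skim.js": b"skim();",
}
findings = audit_scripts(observed, allowlist)
assert findings == [
    ("https://shop.example/pay.js", "altered"),
    ("https://cdn.evil.example/skim.js", "unauthorized"),
]
```

Browsers offer a related native control in Subresource Integrity (the `integrity` attribute on script tags), which enforces the expected-hash check at load time.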


Inside CISA's Unprecedented Election Security Mission

Despite ongoing efforts by foreign adversaries to influence U.S. elections, attempts to subvert the vote have been largely unsuccessful in past elections. CISA's continued expansion of advanced threat detection and response strategies in 2016 and 2020 played a significant role in thwarting attempts by Russia and others to compromise the integrity of the electoral process. The agency has recently issued warnings about "increasingly aggressive Iranian activity during this election cycle," including reported activities to compromise former President Donald Trump's campaign. The Department of Homeland Security designated election infrastructure as a subset of the government facilities sector in 2017, further recognizing the vast networks of voter registration databases, information technology systems, polling places and voting systems as critical infrastructure. ... The agency over the last six years has rolled out a wide range of no-cost voluntary services and resources aimed at reducing risks to election infrastructure, including vulnerability scanning, physical security assessments and supporting the nationwide adoption of .gov domains, which experts say enhance trust by ensuring that election information is verified and comes from official, credible sources.


The Gen Z Guide to Getting Ahead at Work

As a young person entering the workplace with new ideas and fresh eyes and perspectives, you have unique value, experts said. Don't be shy to share your thoughts. You might know something others don't. That could look like sharing tools or shortcuts you know within apps, ideas or stories about how you've solved problems in the past, Paaras said. You might have valuable experience related to a particular topic or insight into how other people your age see things. Or you might be able to spot the inefficiency or error of how things are regularly done. "You're seeing things for the first time, and you can highlight that," Abrahams said. "Focus on the value you bring." ... Set time aside for chatting, by video or in person, with your colleagues and supervisor. Building good relationships can help foster people's trust and willingness to collaborate with you. It also could be a differentiator in your career advancement. "Your presence needs to be felt by others," Wilk said. Seek out one-on-one meetings and casual conversations. Be ready with thoughts, questions and goals for the conversation, Wilk said. When in doubt, remember people love to talk about themselves, she added. Ask them about their career or experience on the job.


Unified Data: The Missing Piece to the AI Puzzle

“A unified data strategy can significantly reduce the time data scientists spend on accessing, re-formatting, or creating data, thereby improving their effectiveness in developing AI models,” Francis says. Yaad Oren, managing director of SAP Labs US and global head of SAP BTP innovation, explains that incorporating AI across an organization is not possible without trusted and governed data. “A unified data strategy simplifies the data landscape, maintains data context and ensures accurate training of AI models,” he says. This leads to more effective AI deployments and allows customers to harness data to drive deeper insights, faster growth, and more efficiency. “A unified data architecture is crucial for creating a holistic view of business operations and avoiding the ramifications of flawed AI,” he adds. By bringing together disparate data from across the business, a data architecture ensures data context is kept intact, providing a picture of how the data was generated, where it resides, when it was created, and who it relates to. “A strategy that incorporates a data architecture empowers users to access and use data in real time, creating a single source of truth for decision making, and automating data management processes,” Oren explains.


The Next Business Differentiator: 3 Trends Defining The GenAI Market

Different industries have distinct needs and, as with cloud, standardized or general GenAI models and services can’t support the specialized requirements of specific industries. This is especially true for regulated industries that have stringent governance, risk and compliance standards — industry- or domain-specific GenAI models will help organizations comply with regulations and compliance standards, ensuring data security and ethical considerations are adhered to. ... The main reason for prioritizing responsible AI is to mitigate bias. Mitigating bias is fundamental to delivering GenAI solutions that have true market applicability and relevance. Ultimately, bias comes from three areas: algorithms, data and humans. Bias from AI algorithms has fallen dramatically in the last decade. Today, algorithms are mostly trustworthy, and the biggest sources of bias in AI are data and humans. When it comes to data, bias exists because of a lack of quality and variety, as well as the often incomplete datasets used to train the algorithm. With humans, there is an inherent lack of trust when it comes to AI, whether because of reported threats to people’s livelihoods or due to AI hallucinating certain information.


Miniaturized brain-machine interface processes neural signals in real time

The MiBMI's small size and low power are key features, making the system suitable for implantable applications. Its minimal invasiveness ensures safety and practicality for use in clinical and real-life settings. It is also a fully integrated system, meaning that the recording and processing are done on two extremely small chips with a total area of 8 mm². This is the latest in a new class of low-power BMI devices developed at Mahsa Shoaran's Integrated Neurotechnologies Laboratory (INL) at EPFL's IEM and Neuro X institutes. "MiBMI allows us to convert intricate neural activity into readable text with high accuracy and low power consumption. This advancement brings us closer to practical, implantable solutions that can significantly enhance communication abilities for individuals with severe motor impairments," says Shoaran. Brain-to-text conversion involves decoding neural signals generated when a person imagines writing letters or words. In this process, electrodes implanted in the brain record neural activity associated with the motor actions of handwriting. The MiBMI chipset then processes these signals in real time, translating the brain's intended hand movements into corresponding digital text.


From Transparency to the Perils of Oversharing

While openness fosters collaboration and trust, oversharing can inadvertently lead to micromanagement, misinterpretation, and a loss of trust, undermining the foundations of a healthy team dynamic. ... Transparency without trust can create a blame culture where team members feel exposed to criticism for every minor mistake. This effect can result in individuals trying to cover their tracks or avoid taking risks, undermining the very principles of Agile. Decision paralysis: When too much transparency leads to stakeholders or managers second-guessing every team decision, it can create decision paralysis. The team may feel that every move is under a microscope, leading them to slow down or become overly cautious, eroding the trust that they can make decisions independently. ... It’s not just the team that needs to manage transparency effectively; stakeholders also need guidance on interpreting the information they receive. Educating stakeholders on Agile practices and the purpose of various metrics can prevent misinterpretation and unnecessary interference. In other words, run workshops for stakeholders on interpreting data and information from your team.



Quote for the day:

"Success is the progressive realization of predetermined, worthwhile, personal goals." -- Paul J. Meyer

Daily Tech Digest - April 26, 2024

Counting the Cost: The Price of Security Neglect

In the perfect scenario, the benefits of a new security solution will reduce the risk of a cyberattack. But, it’s important to invest with the right security vendor. Any time a vendor has access to a company’s systems and data, that company must assess whether the vendor’s security measures are sufficient. The recent Okta breach highlights the significant repercussions of a security vendor breach on its customers. Okta serves as an identity provider for many organizations, enabling single sign-on (SSO). An attacker gaining access to Okta’s environment could potentially compromise user accounts of Okta customers. Without additional access protection layers, customers may become vulnerable to hackers aiming to steal data, deploy malware, or carry out other malicious activities. When evaluating the privacy risks of security investments, it’s important to consider an organization’s security track record and certification history. ... Security and privacy leaders can bolster their case for additional investments by highlighting costly data breaches, and can tilt the scale in their favor by seeking solutions with strong records in security, privacy, and compliance.


Is Your Test Suite Brittle? Maybe It’s Too DRY

DRY in test code often presents a similar dilemma. While excessive duplication can make tests lengthy and difficult to maintain, misapplying DRY can lead to brittle test suites. Does this suggest that test code warrants more duplication than application code? A common solution to brittle tests is to use the DAMP acronym to describe how tests should be written. DAMP stands for "Descriptive and Meaningful Phrases" or "Don't Abstract Methods Prematurely." Another acronym (we love a good acronym!) is WET: "Write Everything Twice," "Write Every Time," "We Enjoy Typing," or "Waste Everyone's Time." The literal definition of DAMP has good intentions - descriptive, meaningful phrases and knowing the right time to extract methods are essential when writing software. However, in a more general sense, DAMP and WET are opposites of DRY. The idea can be summarized as follows: Prefer more duplication in tests than you would in application code. However, the same concerns of readability and maintainability exist in test code as in application code. Duplication of concepts causes the same maintainability problems in test code as in application code.
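The DRY-versus-DAMP trade-off is easier to see in code. Below is a toy illustration (the `apply_discount` function and test names are invented for the example): the shared helper is "DRY" but hides what each test actually checks, while the DAMP tests repeat themselves yet read as self-contained, descriptive phrases.

```python
def apply_discount(price: float, code: str) -> float:
    """Toy system under test: 10% off with a known code, else unchanged."""
    return round(price * 0.9, 2) if code == "HOLIDAY10" else price

# Over-DRY: a shared helper obscures each scenario, and every test
# breaks together if the helper's signature or behavior changes.
def check_discount(price, code, expected):
    assert apply_discount(price, code) == expected

# DAMP: each test states its scenario in a descriptive, standalone phrase,
# at the cost of some repetition.
def test_holiday_code_takes_ten_percent_off():
    assert apply_discount(100.0, "HOLIDAY10") == 90.0

def test_unknown_code_leaves_price_unchanged():
    assert apply_discount(100.0, "BOGUS") == 100.0
```

Neither extreme is the rule; the point is that abstraction in tests should be paid for in readability only when the duplicated concept genuinely belongs in one place.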


PCI Launches Payment Card Cybersecurity Effort in the Middle East

The PCI SSC plans to work closely with any organization that handles payments within the Middle East payment ecosystem, with a focus on security, says Nitin Bhatnagar, PCI Security Standards Council regional director for India and South Asia, who will now also oversee efforts in the Middle East. "Cyberattacks and data breaches on payment infrastructure are a global problem," he says. "Threats such as malware, ransomware, and phishing attempts continue to increase the risk of security breaches. Overall, there is a need for a mindset change." The push comes as the payment industry itself faces significant changes, with alternatives to traditional payment cards taking off, and as financial fraud has grown in the Middle East. ... The Middle East is one region where the changes are most pronounced. Middle East consumers prefer digital wallets to cards, 60% to 27%, as their most preferred method of payment, while consumers in the Asia-Pacific region slightly prefer cards, 43% to 38%, according to an August 2021 report by consultancy McKinsey & Company.


4 ways connected cars are revolutionising transportation

Connected vehicles epitomize the convergence of mobility and data-driven technology, heralding a new era of transportation innovation. As cars evolve into sophisticated digital platforms, the significance of data management and storage intensifies. The storage industry must remain agile, delivering solutions that cater to the evolving needs of the automotive sector. By embracing connectivity and harnessing data effectively, stakeholders can unlock new levels of safety, efficiency, and innovation in modern transportation. ... Looking ahead, connected cars are poised to transform transportation even further. As vehicles become more autonomous and interconnected, the possibilities for innovation are limitless. Autonomous driving technologies will redefine personal mobility, enabling efficient and safe transportation solutions. Data-driven services will revolutionise vehicle ownership, offering personalised experiences tailored to individual preferences. Furthermore, the integration of connected vehicles with smart cities will pave the way for sustainable and efficient urban transportation networks.


Looking outside: How to protect against non-Windows network vulnerabilities

Security administrators running Microsoft systems spend a lot of time patching Windows components, but it’s also critical to place your software review resources appropriately – there’s more out there to worry about than the latest Patch Tuesday. ... Review the security and patching status of your edge, VPN, remote access, and endpoint security software. Each of these categories has been used as an entryway into many government and corporate networks. Be prepared to immediately patch or disable any of these tools at a moment’s notice should the need arise. Ensure that you have a team dedicated to identifying and tracking resources that help alert you to potential vulnerabilities and attacks. Resources such as CISA can keep you alerted, as can signing up for various security and vendor alerts and having staff who follow the security discussions online. These edge devices and software should always be kept up to date, and you should review life-cycle windows as well as newer technology and releases that may decrease the number of emergency patching sessions your edge team finds itself in.


Application Delivery Controllers: A Key to App Modernization

As the infrastructure running our applications has grown more complex, the supporting systems have evolved to be more sophisticated. Load balancers, for example, have been largely superseded by application delivery controllers (ADCs). These devices are usually placed in a data center between the firewall and one or more application servers, an area known as the demilitarized zone (DMZ). While first-generation ADCs primarily handled application acceleration and load balancing between servers, modern enterprise ADCs have considerably expanded capabilities and have evolved into feature-rich platforms. Modern ADCs include such capabilities as traffic shaping, SSL/TLS offloading, web application firewalls (WAFs), DNS, reverse proxies, security analytics, observability and more. They have also evolved from pure hardware form factors to a mixture of hardware and software options. One leader of this evolution is NetScaler, which started more than 20 years ago as a load balancer. In the late 1990s and early 2000s, it handled the majority of internet traffic. 
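At the core of both the original load balancer and the modern ADC sits the same primitive: rotate traffic across a pool of backends while skipping unhealthy ones. The sketch below is a deliberately minimal, assumption-laden illustration of that core (class and backend names are invented); a real ADC layers TLS offload, WAF rules, and traffic shaping on top of it.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal load-balancer core: rotate across healthy backends."""

    def __init__(self, backends):
        self.backends = list(backends)
        self.healthy = set(self.backends)   # updated by health checks
        self._ring = cycle(self.backends)

    def mark_down(self, backend):
        self.healthy.discard(backend)

    def mark_up(self, backend):
        self.healthy.add(backend)

    def next_backend(self):
        # Skip unhealthy servers; give up after one full rotation.
        for _ in range(len(self.backends)):
            candidate = next(self._ring)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy backends")

lb = RoundRobinBalancer(["app1:8080", "app2:8080", "app3:8080"])
lb.mark_down("app2:8080")
# Successive next_backend() calls now rotate over app1 and app3 only.
```

In a production device, `mark_down`/`mark_up` would be driven by periodic health probes rather than called by hand, and the rotation would often be weighted by backend capacity.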


Curbing shadow AI with calculated generative AI deployment

IT leaders countered by locking down shadow IT or making uneasy peace with employees consuming their preferred applications and compute resources. Sometimes they did both. Meanwhile, another unseemly trend unfolded, first slowly, then all at once. Cloud consumption became unwieldy and costly, with IT shooting itself in the foot with misconfigurations and overprovisioning among other implementation errors. As they often do when investment is measured versus business value, IT leaders began looking for ways to reduce or optimize cloud spending. Rebalancing IT workloads became a popular course correction as organizations realized applications may run better on premises or in other clouds. With cloud vendors backtracking on data egress fees, more IT leaders have begun reevaluating their positions. Make no mistake: The public cloud remains a fine environment for testing and deploying applications quickly and scaling them rapidly to meet demand. But it also makes organizations susceptible to unauthorized workloads. The growing democratization of AI capabilities is an IT leader’s governance nightmare. 


CIOs eager to scale AI despite difficulty demonstrating ROI, survey finds

“Today’s CIOs are working in a tornado of innovation. After years of IT expanding into non-traditional responsibilities, we’re now seeing how AI is forcing CIOs back to their core mandate,” Ken Wong, president of Lenovo’s solutions and services group, said in a statement. There is a sense of urgency to leverage AI effectively, but adoption speed and security challenges are hindering efforts. Despite the enthusiasm for AI’s transformative potential, which 80% of CIOs surveyed believe will significantly impact their businesses, the path to integration is not without its challenges. Notably, large portions of organizations are not prepared to integrate AI swiftly, which impacts IT’s ability to scale these solutions. ... IT leaders also face the ongoing challenge of demonstrating and calculating the return on investment (ROI) of technology initiatives. The Lenovo survey found that 61% of CIOs find it extremely challenging to prove the ROI of their tech investments, with 42% not expecting positive ROI from AI projects within the next year. One of the main difficulties is calculating ROI to convince CFOs to approve budgets, and this challenge is also present when considering AI adoption, according to Abhishek Gupta, CIO of Dish TV. 


AI Bias and the Dangers It Poses for the World of Cybersecurity

Without careful monitoring, these biases could delay threat detection, resulting in data leakage. For this reason, companies combine AI’s power with human intelligence to reduce the bias factor shown by AI. The empathy element and moral compass of human thinking often prevent AI systems from making decisions that could otherwise leave a business vulnerable. ... The opposite could also occur, as AI could label a non-threat as malicious activity. This could lead to a series of false positives that cannot even be detected from within the company. ... While some might argue that this is a good thing because supposedly “the algorithm works,” it could also lead to alert fatigue. AI threat detection systems were added to ease the workload in the human department, reducing the number of alerts. However, the constant red flags could cause more work for human security providers, giving them more tickets to solve than they originally had. This could lead to employee fatigue and human error and take away the attention from actual threats that could impact security.


The Peril of Badly Secured Network Edge Devices

The biggest risks involved anyone using internet-exposed Cisco Adaptive Security Appliance devices, who were five times more likely than non-ASA users to file a claim. Users of internet-exposed Fortinet devices were twice as likely to file a claim. Another risk comes in the form of Remote Desktop Protocol. Organizations with internet-exposed RDP filed 2.5 times as many claims as organizations without it, Coalition said. Mass scanning by attackers, including initial access brokers, to detect and exploit poorly protected RDP connections remains rampant. The sheer quantity of new vulnerabilities coming to light underscores the ongoing risks network edge devices pose. ... Likewise for Cisco hardware: "Several critical vulnerabilities impacting Cisco ASA devices were discovered in 2023, likely contributing to the increased relative frequency," Coalition said. In many cases, organizations fail to patch these vulnerabilities, leaving them at increased risk, including by attackers targeting the Cisco AnyConnect vulnerability, designated as CVE-2020-3259, which the vendor first disclosed in May 2020.



Quote for the day:

"Disagree and commit is a really important principle that saves a lot of arguing." -- Jeff Bezos

Daily Tech Digest - September 16, 2022

The AI-First Future of Open Source Data

If we take it one step further from the GPL for data, we begin to see the value equation of data, or “the data-in-to-data-out ratio” as Augustin calls it. He uses the example of why people are so willing to give up parts of their data and privacy to websites because the small amount of data they’re handing over returns greater value back to them. Augustin sees the data-in-to-data-out ratio as a tipping point in open source data. Calling it one of his application principles, Augustin suggests that data engineers should focus on providing users with more value but take less and less information from them. He also wants to figure out a way never to ask your users for anything. You’re only providing them an advantage. For example, new app users will always be asked for information. But how can we skip that step and collect data directly in exchange for providing value? “Most people are willing to [give up data] because they get a lot of utility back. Think about the ratio of how much you put in versus how much you get back. You get back an awful lot. People are willing to give up so much of their personal information because they get a lot back,” he says.


How User Interface Testing Can Fit into the CI/CD Pipeline

Reliance on manual testing is one reason organizations can’t successfully implement CI/CD: a pipeline that involves manual processes cannot be sustained, as those processes slow down the entire delivery cycle. Testing is no longer the sole responsibility of developers or testers, and it takes investment in and integration with infrastructure. Developer teams need to focus on building the coverage that is essential. They should test workflows rather than features to be more efficient. Additionally, manual testers who are not developers can still be part of the process, provided that they use a testing tool that gives them the required automation capabilities in a low-code environment. For example, with Telerik Test Studio, a manual tester can create an automated test by interacting with the application’s UI in a browser. That test can be presented without code, and as a result testers can learn how the code behaves. Another best practice in making UI testing efficient is to be selective with what is included in the pipeline.


Want to change a dysfunctional culture? Intel’s Israel Development Center shows how

Intel’s secret weapon, one that until recently it did not talk about much, is its Israel Development Center. It is the largest employer in Israel, a nation surrounded by hostile countries, and women and men are treated more equally than in most other countries I’ve studied. They are highly supportive of each other, making it an incredibly supportive country for women in a wide variety of industries. The facility itself is impressively large and well-built and eclipses Intel’s corporate office in both size and security. The work done there really defines Intel’s historic success in both product performance and quality, making it an example of how a company should be run. Surprisingly, the collaborative and supportive country culture overrode the hostile and self-destructive corporate culture that has defined the US tech industry. What Gelsinger has done is showcase the development center as a template for the rest of Intel, as a firm more tolerant of failure, more supportive of women and focused like a laser on product quality, performance and caring for Intel’s customers.


Uber security breach 'looks bad', potentially compromising all systems

While it was unclear what data the ride-sharing company retained, he noted that whatever it had most likely could be accessed by the hacker, including trip history and addresses. Given that everything had been compromised, he added that there also was no way for Uber to confirm if data had been accessed or altered since the hackers had access to logging systems. This meant they could delete or alter access logs, he said. In the 2016 breach, hackers infiltrated a private GitHub repository used by Uber software engineers and gained access to an AWS account that managed tasks handled by the ride-sharing service. It compromised data of 57 million Uber accounts worldwide, with hackers gaining access to names, email addresses, and phone numbers. Some 7 million drivers also were affected, including details of more than 600,000 driver licenses. Uber later was found to have concealed the breach for more than a year, even resorting to paying off hackers to delete the information and keep details of the breach quiet.


What Is GPS L5, and How Does It Improve GPS Accuracy?

L5 is the most advanced GPS signal available for civilian use. Although it’s primarily meant for life-critical and high-performance applications, such as helping aircraft navigate, it’s available for everyone, like the L1 signal. So the manufacturers of mass-market consumer devices such as smartphones, fitness trackers, in-car navigation systems, and smartwatches are integrating it into their devices to offer the best possible GPS experience. One of the key advantages of the L5 signal is that it uses the 1176.45MHz radio frequency, which is reserved for aeronautical navigation worldwide. As such, it doesn’t have to contend with interference from other radio traffic in this frequency, such as television broadcasts, radars, and ground-based navigation aids. With L5 data, your device can access more advanced methods to determine which signals have less error and effectively pinpoint the location. It’s particularly helpful in areas where a GPS signal can be received but is severely degraded.


Digital transformation: How to get buy-in

Today’s IT leader has to be much more than tech-savvy, they have to be business-savvy. IT leaders of today are expected to identify and build support for transformational growth, even when it’s not popular. At Clarios, I included “Challenge the Status Quo, Be a Respectful Activist” to our IT guiding principles, knowing that around any CEO or general manager’s table they need one or two disruptors – IT leaders should be one. However, once that activist IT leader sells their vision to the boss, now they have to drive change in their peers and the entire organization, without formal authority. ... Our IT leaders can gain buy-in on new ideas by actively listening to our business partners. Our focus is to understand from their perspective the challenges impeding their work by rounding in our hospital locations to see first-hand the issues. So when we propose solutions, it is from their perspective. Utilizing these practices, we can bring forth the vision of Marshfield Clinic Health System because we can implement technology that bridges human interaction between our patients and care teams, which is at the heart of healthcare.


How to Prepare for New PCI DSS 4.0 Requirements

There are several impactful changes to the requirements associated with DSS v4.0 compliance, ranging from policy development (all changes will require some level of policy updates) to Public Key Infrastructure (PKI), as there will be multiple changes to how keys and certificates are managed. Carroll points out there will also be remote access issues, including defined changes to how systems may be accessed remotely, and risk assessments, which are now expanded into multiple, regular “targeted risk assessments” that capture risk in a format specified by the PCI DSS. Dan Stocker, director at Coalfire, a provider of cybersecurity advisory services, points out fintech is growing rapidly, with innovative uses for credit card data. “Entities should realistically evaluate their obligations under PCI," he says. “Use of descoping techniques, such as tokenization, can reduce total cost of compliance, but also limit product development choices.” He explains modern enterprises have multiple compliance obligations across diverse topics, such as financial reporting, privacy, and, in the case of service providers, many more.
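Stocker's point about tokenization as a descoping technique is worth unpacking: the idea is to replace the primary account number (PAN) with a random surrogate so that downstream systems never handle cardholder data and fall outside PCI DSS scope. The toy vault below illustrates the mechanism only — the class and token format are invented, and a real token service would add access control, auditing, and hardened storage inside the cardholder data environment.

```python
import secrets

class TokenVault:
    """Toy tokenization: swap a PAN for a random token so downstream
    systems never see cardholder data."""

    def __init__(self):
        self._vault = {}  # token -> PAN; lives only inside the CDE

    def tokenize(self, pan: str) -> str:
        token = "tok_" + secrets.token_hex(8)  # no relation to the PAN
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        """Only the vault, inside the CDE, can map a token back."""
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
assert token != "4111111111111111"             # safe to store downstream
assert vault.detokenize(token) == "4111111111111111"
```

Because the token carries no mathematical relationship to the PAN, a breach of a downstream system that stores only tokens yields nothing usable — which is precisely what shrinks the compliance boundary.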


Building Large-Scale Real-Time JSON Applications

A critical part of building large-scale JSON applications is to ensure the JSON objects are organized efficiently in the database for optimal storage and access. Documents may be organized in the database in one or more dedicated sets (tables), over one or more namespaces (databases) to reflect ingest, access and removal patterns. Multiple documents may be grouped and stored in one record either in separate bins (columns) or as sub-documents in a container group document. Record keys are constructed as a combination of the collection-id and the group-id to provide fast logical access as well as group-oriented enumeration of documents. For example, the ticker data for a stock can be organized in multiple records with keys consisting of the stock symbol (collection-id) + date (group-id). Multiple documents can be accessed using either a scan with a filter expression (predicate), a query on a secondary index, or both. A filter expression consists of the values and properties of the elements in JSON. For example, an array larger than a certain size or value is present in a sub-tree. A secondary index defined on a basic or collection type provides fast value-based queries described below.
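The composite-key scheme described above — record keys built from a collection-id plus a group-id, as in the stock symbol + date example — can be sketched with a plain dictionary. This is an illustrative toy (the class and method names are invented, and a real document database would use real partitioning and secondary indexes rather than a prefix scan):

```python
class DocumentStore:
    """Toy keyed store: records addressed by collection-id + group-id."""

    def __init__(self):
        self._records = {}

    @staticmethod
    def make_key(collection_id: str, group_id: str) -> str:
        # Composite key gives fast logical access to one record...
        return f"{collection_id}:{group_id}"

    def put(self, collection_id, group_id, doc):
        self._records[self.make_key(collection_id, group_id)] = doc

    def get(self, collection_id, group_id):
        return self._records.get(self.make_key(collection_id, group_id))

    def scan_collection(self, collection_id):
        # ...and group-oriented enumeration of all records in a collection.
        prefix = collection_id + ":"
        return {k: v for k, v in self._records.items()
                if k.startswith(prefix)}

store = DocumentStore()
store.put("AAPL", "2022-09-15", {"open": 152.1, "close": 150.4})
store.put("AAPL", "2022-09-16", {"open": 150.8, "close": 151.2})
store.put("MSFT", "2022-09-16", {"open": 244.0, "close": 245.3})
assert len(store.scan_collection("AAPL")) == 2  # both AAPL days, no MSFT
```

The same key shape is what lets the database satisfy "all ticker data for one stock" without touching any other collection's records.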


Digital self defense: Is privacy tech killing AI?

AI needs data. Lots of it. The more data you can feed a machine learning algorithm, the better it can spot patterns, make decisions, predict behaviours, personalise content, diagnose medical conditions, power smart everything, and detect cyber threats and fraud; indeed, AI and data make for a happy partnership: “The algorithm without data is blind. Data without algorithms is dumb.” Even so, some digital self defense may be in order. But AI is at risk. Not everyone wants to share, at least not under the current rules of digital engagement. Some individuals disengage entirely, becoming digital hermits. Others proceed with caution, using privacy-enhancing technologies (PETs) to plug the digital leak, a kind of karate chop of digital self defense: they don’t trust website privacy notices; they verify them with tools like DuckDuckGo’s Privacy Grade extension and, soon, machine-readable privacy notices. They don’t tell companies their preferences; they enforce them with dedicated tools, and they search anonymously using AI-powered privacy-protective search engines and browsers like DuckDuckGo, Brave and Firefox.


Why Mutability Is Essential for Real-Time Data Analytics

At Facebook, we built an ML model that scanned all-new calendar events as they were created and stored them in the event database. Then, in real-time, an ML algorithm would inspect this event and decide whether it is spam. If it is categorized as spam, then the ML model code would insert a new field into that existing event record to mark it as spam. Because so many events were flagged and immediately taken down, the data had to be mutable for efficiency and speed. Many modern ML-serving systems have emulated our example and chosen mutable databases. This level of performance would have been impossible with immutable data. A database using copy-on-write would quickly get bogged down by the number of flagged events it would have to update. If the database stored the original events in Partition A and appended flagged events to Partition B, this would require additional query logic and processing power, as every query would have to merge relevant records from both partitions. Both workarounds would have created an intolerable delay for our Facebook users, heightened the risk of data errors, and created more work for developers and/or data engineers.
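The cost difference the excerpt describes — a single in-place field update versus appending flags to a second partition and merging on every read — can be made concrete with a small sketch. The event records and function names below are invented for illustration; they model the two designs, not Facebook's actual system.

```python
# Mutable store: flagging an event is a single in-place field update.
events = {
    101: {"title": "team sync"},
    102: {"title": "FREE CRYPTO!!!"},
}

def flag_spam_in_place(event_id: int) -> None:
    events[event_id]["spam"] = True          # one write, no copy

# Immutable emulation: flags are appended to a second partition, so
# every read must merge the base record with any later flag records.
base_partition = {101: {"title": "team sync"},
                  102: {"title": "FREE CRYPTO!!!"}}
flag_partition = []                          # append-only log of flags

def flag_spam_append(event_id: int) -> None:
    flag_partition.append({"event_id": event_id, "spam": True})

def read_event_merged(event_id: int) -> dict:
    record = dict(base_partition[event_id])
    for flag in flag_partition:              # extra merge work per query
        if flag["event_id"] == event_id:
            record["spam"] = True
    return record

flag_spam_in_place(102)
flag_spam_append(102)
assert events[102]["spam"] is True
assert read_event_merged(102)["spam"] is True
assert "spam" not in read_event_merged(101)
```

Both designs return the same answer, but the append-only version pays the merge cost on every read — exactly the per-query overhead that becomes intolerable at real-time, ML-serving scale.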



Quote for the day:

"Leadership and learning are indispensable to each other." -- John F. Kennedy