Showing posts with label algorithms. Show all posts
Showing posts with label algorithms. Show all posts

Daily Tech Digest - February 13, 2026


Quote for the day:

"If you want teams to succeed, set them up for success—don’t just demand it." -- Gordon Tredgold



Hackers turn bossware against the bosses

Huntress discovered two incidents using this tactic, one late in January and one early this month. Shared infrastructure, overlapping indicators of compromise, and consistent tradecraft across both cases make Huntress strongly believe a single threat actor or group was behind this activity. ... CSOs must ensure that these risks are properly catalogued and mitigated,” he said. “Any actions performed by these agents must be monitored and, if possible, restricted. The abuse of these systems is a special case of ‘living off the land’ attacks. The attacker attempts to abuse valid existing software to perform malicious actions. This abuse is often difficult to detect.” ... Huntress analyst Pham said to defend against attacks combining Net Monitor for Employees Professional and SimpleHelp, infosec pros should inventory all applications so unapproved installations can be detected. Legitimate apps should be protected with robust identity and access management solutions, including multi-factor authentication. Net Monitor for Employees should only be installed on endpoints that don’t have full access privileges to sensitive data or critical servers, she added, because it has the ability to run commands and control systems. She also noted that Huntress sees a lot of rogue remote management tools on its customers’ IT networks, many of which have been installed by unwitting employees clicking on phishing emails. This points to the importance of security awareness training, she said. 


Why secure OT protocols still struggle to catch on

“Simply having ‘secure’ protocol options is not enough if those options remain too costly, complex, or fragile for operators to adopt at scale,” Saunders said. “We need protections that work within real-world constraints, because if security is too complex or disruptive, it simply won’t be implemented.” ... Security features that require complex workflows, extra licensing, or new infrastructure often lose out to simpler compensating controls. Operators interviewed said they want the benefits of authentication and integrity checks, particularly message signing, since it prevents spoofing and unauthorized command execution. ... Researchers identified cost as a primary barrier to adoption. Operators reported that upgrading a component to support secure communications can cost as much as the original component, with additional licensing fees in some cases. Costs also include hardware upgrades for cryptographic workloads, training staff, integrating certificate management, and supporting compliance requirements. Operators frequently compared secure protocol deployment costs with segmentation and continuous monitoring tools, which they viewed as more predictable and easier to justify. ... CISA’s recommendations emphasize phased approaches and operational realism. Owners and operators are advised to sign OT communications broadly, apply encryption where needed for sensitive data such as passwords and key exchanges, and prioritize secure communication on remote access paths and firmware uploads.


SaaS isn’t dead, the market is just becoming more hybrid

“It’s important to avoid overgeneralizing ‘SaaS,’” Odusote emphasized . “Dev tools, cybersecurity, productivity platforms, and industry-specific systems will not all move at the same pace. Buyers should avoid one-size-fits-all assumptions about disruption.” For buyers, this shift signals a more capability-driven, outcomes-focused procurement era. Instead of buying discrete tools with fixed feature sets, they’ll increasingly be able to evaluate and compare platforms that are able to orchestrate agents, adapt workflows, and deliver business outcomes with minimal human intervention. ... Buyers will likely have increased leverage in certain segments due to competitive pressure among new and established providers, Odusote said. New entrants often come with more flexible pricing, which obviously is an attraction for those looking to control costs or prove ROI. At the same time, traditional SaaS leaders are likely to retain strong positions in mission-critical systems; they will defend pricing through bundled AI enhancements, he said. So, in the short term, buyers can expect broader choice and negotiation leverage. “Vendors can no longer show up with automatic annual price increases without delivering clear incremental value,” Odusote pointed out. “Buyers are scrutinizing AI add-ons and agent pricing far more closely.”


When algorithms turn against us: AI in the hands of cybercriminals

Cybercriminals are using AI to create sophisticated phishing emails. These emails are able to adapt the tone, language, and reference to the person receiving it based on the information that is publicly available about them. By using AI to remove the red flag of poor grammar from phishing emails, cybercriminals will be able to increase the success rate and speed with which the stolen data is exploited. ... An important consideration in the arena of cyber security (besides technical security) is the psychological manipulation of users. Once visual and audio “cues” can no longer be trusted, there will be an erosion of the digital trust pillar. The once-recognizable verification process is now transforming into multi-layered authentication which expands the amount of time it takes to verify a decision in a high-pressure environment. ... AI’s misuse is a growing problem that has created a paradox. Innovation cannot stop (nor should it), and AI is helping move healthcare, finance, government and education forward. However, the rate at which AI has been adopted has surpassed the creation of frameworks and/or regulations related to ethics or security. As a result, cyber security needs to transition from a reactive to a predictive stance. AI must be used to not only react to attacks, but also anticipate future attacks. 


Those 'Summarize With AI' Buttons May Be Lying to You

Put simply, when a user visits a rigged website and clicks a "Summarize With AI" button on a blog post, they may unknowingly trigger a hidden instruction embedded in the link. That instruction automatically inserts a specially crafted request into the AI tool before the user even types anything. ... The threat is not merely theoretical. According to Microsoft, over a 60-day period, it observed 50 unique instances of prompt-based AI memory poisoning attempts for promotional purposes. ... AI recommendation poisoning is a sort of drive-by technique with one-click interaction, he notes. "The button will take the user — after the click — to the AI domain relevant and specific for one of the AI assistants targeted," Ganacharya says. To broaden the scope, an attacker could simply generate multiple buttons that prompt users to "summarize" something using the AI agent of their choice, he adds. ... Microsoft had some advice for threat hunting teams. Organizations can detect if they have been affected by hunting for links pointing to AI assistant domains and containing prompts with certain keywords like "remember," "trusted source," "in future conversations," and "authoritative source." The company's advisory also listed several threat hunting queries that enterprise security teams can use to detect AI recommendation poisoning URLs in emails and Microsoft Teams Messages, and to identify users who might have clicked on AI recommendation poisoning URLs.


EU Privacy Watchdogs Pan Digital Omnibus

The commission presented its so-called "Digital Omnibus" package of legal changes in November, arguing that the bloc's tech rules needed streamlining. ... Some of the tweaks were expected and have been broadly welcomed, such as doing away with obtrusive cookie consent banners in many cases, and making it simpler for companies to notify of data breaches in a way that satisfies the requirements of multiple laws in one go. But digital rights and consumer advocates are reacting furiously to an unexpected proposal for modifying the General Data Protection Regulation. ... "Simplification is essential to cut red tape and strengthen EU competitiveness - but not at the expense of fundamental rights," said EDPB chair Anu Talus in the statement. "We strongly urge the co-legislators not to adopt the proposed changes in the definition of personal data, as they risk significantly weakening individual data protection." ... Another notable element of the Digital Omnibus is the proposal to raise the threshold for notifying all personal data breaches to supervisory authorities. As the GDPR currently stands, organizations must notify a data protection authority within 72 hours of becoming aware of the breach. If amended as the commission proposes, the obligation would only apply to breaches that are "likely to result in a high risk" to the affected people's rights - the same threshold that applies to the duty to notify breaches to the affected data subjects themselves - and the notification deadline would be extended to 96 hours.


The Art of the Comeback: Why Post-Incident Communication is a Secret Weapon

Although technical resolutions may address the immediate cause of an outage, effective communication is essential in managing customer impact and shaping public perception—often influencing stakeholders’ views more strongly than the issue itself. Within fintech, a company's reputation is not built solely on product features or interface design, but rather on the perceived security of critical assets such as life savings, retirement funds, or business payrolls. In this high-stakes environment, even brief outages or minor data breaches are perceived by clients as threats to their financial security. ... While the natural instinct during a crisis (like a cyber breach or operational failure) is to remain silent to avoid liability, silence actually amplifies damage. In the first 48 hours, what is said—or not said—often determines how a business is remembered. Post-incident communication (PIC) is the bridge between panic and peace of mind. Done poorly, it looks like corporate double-speak. Done well, it demonstrates a level of maturity and transparency that your competitors might lack. ... H2H communication acknowledges the user’s frustration rather than just providing a technical error code. It recognizes the real-world impact on people, not just systems. Admitting mistakes and showing sincere remorse, rather than using defensive, legalistic language, makes a company more relatable and trustworthy. Using natural, conversational language makes the communication feel sincere rather than like an automated, cold response.


Why AI success hinges on knowledge infrastructure and operational discipline

Many organisations assume that if information exists, it is usable for GenAI, but enterprise content is often fragmented, inconsistently structured, poorly contextualised, and not governed for machine consumption. During pilots, this gap is less visible because datasets are curated, but scaling exposes the full complexity of enterprise knowledge. Conflicting versions, missing context, outdated material, and unclear ownership reduce performance and erode confidence, not because models are incapable, but because the knowledge they depend on is unreliable at scale. ... Human-in-the-loop processes struggle to keep pace with scale. Successful deployments treat HITL as a tiered operating structure with explicit thresholds, roles, and escalation paths. Pilot-style broad review collapses under volume; effective systems route only low-confidence or high-risk outputs for human intervention. ... Learning compounds over time as every intervention is captured and fed back into the system, reducing repeated manual review. Operationally, human-in-the-loop teams function within defined governance frameworks, with explicit thresholds, escalation paths, and direct integration into production workflows to ensure consistency at scale. In short, a production-grade human-in-the-loop model is not an extension of BPO but an operating capability combining domain expertise, governance, and system learning to support intelligent systems reliably.


Why short-lived systems need stronger identity governance

Consider the lifecycle of a typical microservice. In its journey from a developer’s laptop to production, it might generate a dozen distinct identities: a GitHub token for the repository, a CI/CD service account for the build, a registry credential to push the container, and multiple runtime roles to access databases, queues and logging services. The problem is not just volume; it is invisibility. When a developer leaves, HR triggers an offboarding process. Their email is cut, their badge stops working. But what about the five service accounts they hardcoded into a deployment script three years ago? ... In reality, test environments are often where attackers go first. It is the path of least resistance. We saw this play out in the Microsoft Midnight Blizzard attack. The attackers did not burn a zero-day exploit to break down the front door; they found a legacy test tenant that nobody was watching closely. ... Our software supply chain is held together by thousands of API keys and secrets. If we continue to rely on long-lived static credentials to glue our pipelines together, we are building on sand. Every static key sitting in a repo—no matter how private you think it is—is a ticking time bomb. It only takes one developer to accidentally commit a .env file or one compromised S3 bucket to expose the keys to the kingdom. ... Paradoxically, by trying to control everything with heavy-handed gates, we end up with less visibility and less control. The goal of modern identity governance shouldn’t be to say “no” more often; it should be to make the secure path the fastest path.


India's E-Rupee Leads the Secure Adoption of CBDCs

India has the e-rupee, which will eventually be used as a legal tender for domestic payments as well as for international transactions and cross-border payments. Ever since RBI launched the e-rupee, or digital rupee, in December 2022, there has been between INR 400 to 500 crore - or $44 to $55 million - in circulation. Many Indian banks are participating in this pilot project. ... Building broad awareness of CBDCs as a secure method for financial transactions is essential. Government and RBI-led awareness campaigns highlighting their security capability can strengthen user confidence and drive higher adoption and transaction volumes. People who have lost money due to QR code scams, fake calls, malicious links and other forms of payment fraud need to feel confident about using CBDCs. IT security companies are also cooperating with RBI to provide data confidentiality, transaction confidentiality and transaction integrity. E-transactions will be secured by hashing, digital signing and [advanced] encryption standards such as AES-192. This can ensure that the transaction data is not tampered with or altered. ... HSMs use advanced encryption techniques to secure transactions and keys. The HSM hardware [boxes] act as cryptographic co-processors and accelerate the encryption and decryption processes to minimize latency in financial transactions. 


Daily Tech Digest - January 18, 2026


Quote for the day:

"Surround yourself with great people; delegate authority; get out of the way" -- Ronald Reagan



Data sovereignty: an existential issue for nations and enterprises

Law-making bodies have in recent years sought to regulate data flows to strengthen their citizens’ rights – for example, the EU bolstering individual citizens’ privacy through the General Data Protection Regulation (GDPR). This kind of legislation has redefined companies’ scope for storing and processing personal data. By raising the compliance bar, such measures are already reshaping C-level investment decisions around cloud strategy, AI adoption and third-party access to their corporate data. ... Faced with dynamic data sovereignty risks, enterprises have three main approaches ahead of them: First, they can take an intentional risk assessment approach. They can define a data strategy addressing urgent priorities, determining what data should go where and how it should be managed - based on key metrics such as data sensitivity, the nature of personal data, downstream impacts, and the potential for identification. Such a forward-looking approach will, however, require a clear vision and detailed planning. Alternatively, the enterprise could be more reactive and detach entirely from its non-domestic public cloud service providers. This is riskier, given the likely loss of access to innovation and, worse, the financial fallout that could undermine their pursuit of key business objectives. Lastly, leaders may choose to do nothing and hope that none of these risks directly affects them. This is the highest-risk option, leaving no protection from potentially devastating financial and reputational consequences of an ineffective data sovereignty strategy.


Verification Debt: When Generative AI Speeds Change Faster Than Proof

Software delivery has always lived with an imbalance. It is easier to change a system than to demonstrate that the change is safe under real workloads, real dependencies, and real failure modes. ... The risk is not that teams become careless. The risk is that what looks correct on the surface becomes abundant while evidence remains scarce. ... A useful name for what accumulates in the mismatch is verification debt. It is the gap between what you released and what you have demonstrated, with evidence gathered under conditions that resemble production, to be safe and resilient. Technical debt is a bet about future cost of change. Verification debt is unknown risk you are running right now. Here, verification does not mean theorem proving. It means evidence from tests, staged rollouts, security checks, and live production signals that is strong enough to block a release or trigger a rollback. It is uncertainty about runtime behavior under realistic conditions, not code cleanliness, not maintainability, and not simply missing unit tests. If you want to spot verification debt without inventing new dashboards, look at proxies you may already track. ... AI can help with parts of verification. It can suggest tests, propose edge cases, and summarize logs. It can raise verification capacity. But it cannot conjure missing intent, and it cannot replace the need to exercise the system and treat the resulting evidence as strong enough to change the release decision. Review is helpful. Review is evidence of readability and intent.


Executive-level CISO titles surge amid rising scope strain

Executive-level CISOs were more likely to report outside IT than peers with VP or director titles, according to the findings. The report frames this as part of a broader shift in how organisations place accountability for cyber risk and oversight. The findings arrive as boards and senior executives assess cyber exposure alongside other enterprise risks. The report links these expectations to the need for security leaders to engage across legal, risk, operations and other functions. ... Smaller organisations and industries with leaner security teams showed the highest levels of strain, the report says. It adds that CISOs warn these imbalances can delay strategic initiatives and push teams towards reactive security operations. The report positions this issue as a management challenge as well as a governance question. It links scope creep with wider accountability and higher expectations on security leaders, even where budgets and staffing remain constrained. ... Recruiters and employers have watched turnover trends closely as demand for senior security leadership has remained high across many sectors. The report suggests that title, scope and reporting structure form part of how CISOs evaluate roles. ... "The demand for experienced CISOs remains strong as the role continues to become more complex and more 'executive'," said Martano. "Understanding how organizations define scope, reporting structure, and leadership access and visibility is critical for CISOs planning their next move and for companies looking to hire or retain security leaders."


What’s in, and what’s out: Data management in 2026 has a new attitude

Data governance is no longer a bolt-on exercise. Platforms like Unity Catalog, Snowflake Horizon and AWS Glue Catalog are building governance into the foundation itself. This shift is driven by the realization that external governance layers add friction and rarely deliver reliable end-to-end coverage. The new pattern is native automation. Data quality checks, anomaly alerts and usage monitoring run continuously in the background. ... Companies want pipelines that maintain themselves. They want fewer moving parts and fewer late-night failures caused by an overlooked script. Some organizations are even bypassing pipes altogether. Zero ETL patterns replicate data from operational systems to analytical environments instantly, eliminating the fragility that comes with nightly batch jobs. ... Traditional enterprise warehouses cannot handle unstructured data at scale and cannot deliver the real-time capabilities needed for AI. Yet the opposite extreme has failed too. The highly fragmented Modern Data Stack scattered responsibilities across too many small tools. It created governance chaos and slowed down AI readiness. Even the rigid interpretation of Data Mesh has faded. ... The idea of humans reviewing data manually is no longer realistic. Reactive cleanup costs too much and delivers too little. Passive catalogs that serve as wikis are declining. Active metadata systems that monitor data continuously are now essential.


How Algorithmic Systems Automate Inequality

The deployment of predictive analytics in public administration is usually justified by the twin pillars of austerity and accuracy. Governments and private entities argue that automated decision-making systems reduce administrative bloat while eliminating the subjectivity of human caseworkers. ... This dynamic is clearest in the digitization of the welfare state. When agencies turn to machine learning to detect fraud, they rarely begin with a blank slate, training their models on historical enforcement data. Because low-income and minority populations have historically been subject to higher rates of surveillance and policing, these datasets are saturated with selection bias. The algorithm, lacking sociopolitical context, interprets this over-representation as an objective indicator of risk, identifying correlation and deploying it as causality. ... Algorithmic discrimination, however, is diffuse and difficult to contest. A rejected job applicant or a flagged welfare recipient rarely has access to the proprietary score that disqualified them, let alone the training data or the weighting variable—they face a black box that offers a decision without a rationale. This opacity makes it nearly impossible for an individual to challenge the outcome, effectively insulating the deploying organisation from accountability. ... Algorithmic systems do not observe the world directly; they inherit their view of reality from datasets shaped by prior policy choices and enforcement practices. To assess such systems responsibly requires scrutiny of the provenance of the data on which decisions are built and the assumptions encoded in the variables selected.


DevSecOps for MLOps: Securing the Full Machine Learning Lifecycle

The term "MLSecOps" sounds like consultant-speak. I was skeptical too. But after auditing ML pipelines at eleven companies over the past eighteen months, I've concluded we need the term because we need the concept — extending DevSecOps practices across the full machine learning lifecycle in ways that account for ML-specific threats. The Cloud Security Alliance's framework is useful here. Securing ML systems means protecting "the confidentiality, integrity, availability, and traceability of data, software, and models." That last word — traceability — is where most teams fail catastrophically. In traditional software, you can trace a deployed binary back to source code, commit hash, build pipeline, and even the engineer who approved the merge. ... Securing ML data pipelines requires adopting practices that feel tedious until the day they save you. I'm talking about data validation frameworks, dataset versioning, anomaly detection at ingestion, and schema enforcement like your business depends on it — because it does. Last September, I worked with an e-commerce company deploying a recommendation model. Their data pipeline pulled from fifteen different sources — user behavior logs, inventory databases, third-party demographic data. Zero validation beyond basic type checking. We implemented Great Expectations — an open-source data validation framework — as a mandatory CI check. 


Autonomous Supply Chains: Catalyst for Building Cyber-Resilience

Autonomous supply chains are becoming essential for building resilience amid rising global disruptions. Enabled by a strong digital core, agentic architecture, AI and advanced data-driven intelligence, together with IoT and robotics, they facilitate operations that continuously learn, adapt and optimize across the value chain. ... Conventional thinking suggests that greater autonomy widens the attack surface and diminishes human oversight turning it into a security liability. However, if designed with cyber resilience at its core, autonomous supply chain can act like a “digital immune system,” becoming one of the most powerful enablers of security. ... As AI operations and autonomous supply chains scale, traditional perimeter simply won’t work. Organizations must adopt a Zero Trust security model to eliminate implicit trust at every access point. A Zero Trust model, centered on AI-driven identity and access management, ensures continuous authentication, network micro-segmentation and controlled access across users, devices and partners. By enforcing “never trust, always verify,” organizations can minimize breach impact and contain attackers from freely moving across systems, maintaining control even in highly automated environments. ... Autonomy in the supply chain thrives on data sharing and connectivity across suppliers, carriers, manufacturers, warehouses and retailers, making end-to-end visibility and governance vital for both efficiency and security. 


When enterprise edge cases become core architecture

What matters most is not the presence of any single technology, but the requirements that come with it. Data that once lived in separate systems now must be consistent and trusted. Mobile devices are no longer occasional access points but everyday gateways. Hiring workflows introduce identity and access considerations sooner than many teams planned for. As those realities stack up, decisions that once arrived late in projects are moving closer to the start. Architecture and governance stop being cleanup work and start becoming prerequisites. ... AI is no longer layered onto finished systems. Mobile is no longer treated as an edge. Hiring is no longer insulated from broader governance and security models. Each of these shifts forces organizations to think earlier about data, access, ownership and interoperability than they are used to doing. What has changed is not just ambition, but feasibility. AI can now work across dozens of disparate systems in ways that were previously unrealistic. Long-standing integration challenges are no longer theoretical problems. They are increasingly actionable -- and increasingly unavoidable. ... As a result, integration, identity and governance can no longer sit quietly in the background. These decisions shape whether AI initiatives move beyond experimentation, whether access paths remain defensible and whether risk stays contained or spreads. Organizations that already have a clear view of their data, workflows and access models will find it easier to adapt. 


Why New Enterprise Architecture Must Be Built From Steel, Not Straw

Architecture must reflect future ambition. Ideally, architects build systems with a clear view of where the product and business are heading. When a system architecture is built for the present situation, it’s likely lacking in flexibility and scalability. That said, sound strategic decisions should be informed by well-attested or well-reasoned trends, not just present needs and aspirations. ... Tech leaders should avoid overcommitting to unproven ideas—i.e., not get "caught up" in the hype. Safe experimentation frameworks (from hypothesis to conclusion) reduce risk by carefully applying best practices to testing out approaches. In a business context with something as important as the technology foundation the organization runs in, do not let anyone mischaracterize this as timidity. Critical failure is a career-limiting move, and potentially an organizational catastrophe. ... The art lies in designing systems that can absorb future shifts without constant rework. That comes from aligning technical decisions not only with what the company is today, but also what it intends to become. Future-ready architecture isn’t the comparatively steady and predictable discipline it was before AI-enabled software features. As a consequence, there’s wisdom in staying directional, rather than architecting for the next five years. Align technical decisions with long-term vision but built with optionality wherever possible. 


Why Engineering Culture Is Everything: Building Teams That Actually Work

The culture is something that is a fact and it's also something intrinsic with human beings. We're people, we have a background. We were raised in one part of the world versus another. We have the way that we talk and things that we care about. All those things influence your team indirectly and directly. It's really important, you as a leader, to be aware that as an engineer, I use a lot of metaphors from monitoring and observability. We always talk about known knowns, known unknowns, and unknown unknowns. Those are really important to understand on a systems level, period, because your social technical system is also a system. The people that you work with, the way you work, your organization, it's a system. And if you're not aware of what are the metrics you need to track, what are the things that are threats to it, the good old strengths, weaknesses, opportunities, and threats. ... What we can learn from other industries is their lessons. Again, we are now on yet another industrial revolution. This time it's more of a knowledge revolution. We can learn from civil engineering like, okay, when the brick was invented, that was a revolution. When the brick was invented, what did people do in order to make sure that bricks matter? That's a fascinating and very curious story about the Freemasons. People forget the Freemasons were a culture about making sure that these constructions techniques, even more than the technologies, the techniques, were up to standards. 

Daily Tech Digest - October 29, 2025


Quote for the day:

“If you don’t have a competitive advantage, don’t compete.” -- Jack Welch


Intuit learned to build AI agents for finance the hard way: Trust lost in buckets, earned back in spoonfuls

Intuit's technical strategy centers on a fundamental design decision. For financial queries and business intelligence, the system queries actual data, rather than generating responses through large language models (LLMs). Also critically important: That data isn't all in one place. Intuit's technical implementation allows QuickBooks to ingest data from multiple distinct sources: native Intuit data, OAuth-connected third-party systems like Square for payments and user-uploaded files such as spreadsheets containing vendor pricing lists or marketing campaign data. This creates a unified data layer that AI agents can query reliably. ... Beyond the technical architecture, Intuit has made explainability a core user experience across its AI agents. This goes beyond simply providing correct answers: It means showing users the reasoning behind automated decisions. When Intuit's accounting agent categorizes a transaction, it doesn't just display the result; it shows the reasoning. This isn't marketing copy about explainable AI, it's actual UI displaying data points and logic. ... In domains where accuracy is critical, consider whether you need content generation or data query translation. Intuit's decision to treat AI as an orchestration and natural language interface layer dramatically reduces hallucination risk and avoids using AI as a generative system.


Step aside, SOC. It’s time to ROC

The typical SOC playbook is designed to contain or remediate issues after the fact by applying a patch or restoring a backup, but they don’t anticipate or prevent the next hit. That structure leaves executives without the proper context or language they need to make financially sound decisions about their risk exposure. ... At its core, the Resilience Risk Operations Center (ROC) is a proactive intelligence hub. Think of it as a fusion center in which cyber, business and financial risk come together to form one clear picture. While the idea of a ROC isn’t entirely new — versions of it have existed across government and private sectors — the latest iterations emphasize collaboration between technical and financial teams to anticipate, rather than react to, threats. ... Of course, building the ROC wasn’t all smooth sailing. Just like military adversaries, cyber criminals are constantly evolving and improving. Scarier yet, just a single keystroke by a criminal actor can set off a chain reaction of significant disruptions. That makes trying to anticipate their next move feel like playing chess against an opponent who is changing the rules mid-game. There was also the challenge of breaking down the existing silos between cyber, risk and financial teams. ... The ROC concept represents the first real step in that journey towards cyber resilience. It’s not as a single product or platform, but as a strategic shift toward integrated, financially informed cyber defense. 


Data Migration in Software Modernization: Balancing Automation and Developers’ Expertise

The process of data migration is often far more labor-intensive than expected. We've only described a few basic features, and even implementing this little set requires splitting a single legacy table into three normalized tables. In real-world scenarios, the number of such transformations is often significantly higher. Additionally, consider the volume of data handled by applications that have been on the market for decades. Migrating such data structures is a major task. The amount of custom logic a developer must implement to ensure data integrity and correct representation can be substantial. ... Automated data migration tools can help developers migrate to a different database management system or to a new version of the DBMS in use, applying the required data manipulations to ensure accurate representation. Also, they can copy the id, email, and nickname fields with little trouble. Possibly, there will be no issues with replicating the old users table into a staging environment. Automated data migration tools can’t successfully perform the tasks required for the use case we described earlier. For instance, infer gender from names (e.g., determine "Sarah" is female, "John" is male), or populate the interests table dynamically from user-provided values. Also, there could be issues with deduplicating shared interests across users (e.g., don’t insert "kitchen gadgets" twice) or creating the correct many-to-many relationships in user_interests.


The Quiet Rise of AI’s Real Enablers

“Models need so much more data and in multiple formats,” shared George Westerman, Senior Lecturer and Principal Research Scientist, MIT Sloan School of Management. “Where it used to be making sense of structured data, which was relatively straightforward, now it’s: ‘What do we do with all this unstructured data? How do we tag it? How do we organize it? How do we store it?’ That’s a bigger challenge.” ... As engineers get pulled deeper into AI work, their visibility is rising. So is their influence on critical decisions. The report reveals that data engineers are now helping shape tooling choices, infrastructure plans, and even high-level business strategy. Two-thirds of the leaders say their engineers are involved in selecting vendors and tools. More than half say they help evaluate AI use cases and guide how different business units apply AI models. That represents a shift from execution to influence. These engineers are no longer just implementing someone else’s ideas. They are helping define the roadmap. It also signals something bigger. AI success is not just about algorithms. It is about coordination. ... So the role and visibility of data engineers are clearly changing. But are we seeing real gains in productivity? The report suggests yes. More than 70 percent of tech leaders said AI tools are already making their teams more productive. The workload might be heavier, but it’s also more focused. Engineers are spending less time fixing brittle pipelines and more time shaping long-term infrastructure.


The silent killer of CPG digital transformation: Data & knowledge decay

Data without standards is chaos. R&D might record sugar levels as “Brix,” QA uses “Bx,” and marketing reduces it to “sweetness score.” When departments speak different data languages, integration becomes impossible. ... When each function hoards its own version of the truth, leadership decisions are built on fragments. At one CPG I observed, R&D reported a product as cost-neutral to reformulate, while supply chain flagged a 12% increase. Both were “right” based on their datasets — but the company had no harmonized golden record. ... Senior formulators and engineers often retire or are poached, taking decades of know-how with them. APQC warns that unmanaged knowledge loss directly threatens innovation capacity and recommends systematic capture methods. I’ve seen this play out: a CPG lost its lead emulsification expert to a competitor. Within six months, their innovation pipeline slowed dramatically, while their competitor accelerated. The knowledge wasn’t just valuable — it was strategic. ... Intuition still drives most big CPG decisions. While human judgment is critical, relying on gut feel alone is dangerous in the age of AI-powered formulation and predictive analytics. ... Define enterprise-wide data standards: Create master schemas for formulations, processes and claims. Mandate structured inputs. Henkel’s success demonstrates that without shared standards, even the best tools underperform.


From Chef to CISO: An Empathy-First Approach to Cybersecurity Leadership

Rather than focusing solely on technical credentials or a formal cybersecurity education, Lyons prioritizes curiosity and hunger for learning as the most critical qualities in potential hires. His approach emphasizes empathy as a cornerstone of security culture, encouraging his team to view security incidents not as failures to be punished, but as opportunities to coach and educate colleagues. ... We're very technically savvy and it's you have a weak moment or you get distracted because you're a busy person. Just coming at it and approaching it with a very thoughtful culture-oriented response is very important for me. Probably the top characteristic of my team. I'm super fortunate. And that I have people from ages, from end to end, backgrounds from end to end that are all part of the team. But one of those core principles that they all follow with is empathy and trying to grow culture because culture scales. ... anyone who's looking at adopting new technologies in the cybersecurity world is firstly understand that the attackers have access to just about everything that you have. So, they're going to come fast and they're going to come hard at you and its they can make a lot more mistakes than you have. So, you have to focus and ensure that you're getting right every day what they can have the opportunity to get wrong. 


It takes an AWS outage to prioritize diversification

AWS’s latest outage, caused by a data center malfunction in Northern Virginia, didn’t just disrupt its direct customers; it served as a stark reminder of how deeply our digital world relies on a select few cloud giants. A single system hiccup in one region reverberated worldwide, stopping critical services for millions of users. ... The AWS outage is part of a broader pattern of instability common to centralized systems. ... The AWS outage has reignited a longstanding argument for organizational diversification in the cloud sector. Diversification enhances resilience. It decentralizes an enterprise’s exposure to risks, ensuring that a single provider’s outage doesn’t completely paralyze operations. However, taking this step will require initiative—and courage—from IT leaders who’ve grown comfortable with the reliability and scale offered by dominant providers. This effort toward diversification isn’t just about using a multicloud strategy (although a combined approach with multiple hyperscalers is an important aspect). Companies should also consider alternative platforms and solutions that add unique value to their IT portfolios. Sovereign clouds, specialized services from companies like NeoCloud, managed service providers, and colocation (colo) facilities offer viable options. Here’s why they’re worth exploring. ... The biggest challenge might be psychological rather than technical. Many companies have internalized the idea that the hyperscalers are the only real options for cloud infrastructure.


What brain privacy will look like in the age of neurotech

What Meta has just introduced, what Apple has now made native as part of its accessibility protocols, is to enable picking up your intentions through neural signals and sensors that AI decodes to allow you to navigate through all of that technology. So I think the first generation of most of these devices will be optional. That is, you can get the smart watch without the neural band, you can get the airpods without the EEG [electroencephalogram] sensors in them. But just like you can't get an Apple watch now without getting an Apple watch with a heart rate sensor, second and third generation of these devices, I think your only option will be to get the devices that have the neural sensors in them. ... There's a couple of ways to think about hacking. One is getting access to what you're thinking and another one is changing what you're thinking. One of the now classic examples in the field is how researchers were able to, when somebody was using a neural headset to play a video game, embed prompts that the conscious mind wouldn't see to be able to figure out what the person's PIN code and address were for their bank account and mailing address. In much the same way that a person's mind could be probed for how they respond to Communist messaging, a person's mind could be probed to see recognition of a four digit code or some combination of numbers and letters to be able to try to get to a person's password without them even realizing that's what's happening.


Beyond Alerts and Algorithms: Redefining Cyber Resilience in the Age of AI-Driven Threats

In an average enterprise Security Operations Center (SOC), analysts face tens of thousands of alerts daily. Even the most advanced SIEM or EDR platforms struggle with false positives, forcing teams to spend the bulk of their time sifting through noise instead of investigating real threats. The result is a silent crisis: SOC fatigue. Skilled analysts burn out, genuine threats slip through, and the mean time to respond (MTTR) increases dangerously. But the real issue isn’t just too many alerts — it’s the lack of context. Most tools operate in isolation. An endpoint alert means little without correlation to user behavior, network traffic, or threat intelligence. Without this contextual layer, detection lacks depth and intent remains invisible. ... Resilience, however, isn’t achieved once — it’s engineered continuously. Techniques like Continuous Automated Red Teaming (CART) and Breach & Attack Simulation (BAS) allow enterprises to test, validate, and evolve their defenses in real time. AI won’t replace human judgment — it enhances it. The SOC of the future will be machine-accelerated yet human-guided, capable of adapting dynamically to evolving threats. ... Today’s CISOs are more than security leaders — they’re business enablers. They sit at the intersection of risk, technology, and trust. Boards now expect them not just to protect data, but to safeguard reputation and ensure continuity.


Quantum Circuits brings dual-rail qubits to Nvidia’s CUDA-Q development platform

Quantum Circuits’ dual-rail chip means that it combines two different quantum computing approaches — superconducting resonators with transmon qubits. The qubit itself is a photon, and there’s a superconducting circuit that controls the photon. “It matches the reliability benchmarks of ions and neutral atoms with the speed of the superconducting platform,” says Petrenko. There’s another bit of quantum magic built into the platform, he says — error awareness. “No other quantum computer tells you in real time if it encounters an error, but ours does,” he says. That means that there’s potential to correct errors before scaling up, rather than scaling up first and then trying to do error correction later. In the near-term, the high reliability and built-in error correction makes it an extremely powerful tool for developing new algorithms, says Petrenko. “You can start kind of opening up a new door and tackling new problems. We’ve leveraged that already for showing new things for machine learning.” It’s a different approach to what other quantum computer makers are taking, confirms TechInsights’ Sanders. According to Sanders, this dual-rail method combines the best of both types of qubits, lengthening coherence time, plus integrating error correction. Right now, Seeker is only available via Quantum Circuits’ own cloud platform and only has eight qubits.

Daily Tech Digest - July 13, 2024

Work in the Wake of AI: Adapting to Algorithmic Management and Generative Technologies

Current legal frameworks are struggling to keep pace with the issues arising from algorithmic management. Traditional employment laws, such as those concerning unfair dismissal, often do not extend protections to “workers” as a distinct category. Furthermore, discrimination laws require proof that the discriminatory behaviour was due or related to the protected characteristic, which is difficult to ascertain and prove with algorithmic systems. To mitigate these issues, the researchers recommend a series of measures. These include ensuring algorithmic systems respect workers’ rights, granting workers the right to opt out of automated decisions such as job termination, banning excessive data monitoring and establishing the right to a human explanation for decisions made by algorithms. ... Despite the rapid deployment of GenAI and the introduction of policies around its use, concerns about misuse are still prevalent among nearly 40% of tech leaders. While recognising AI’s potential, 55% of tech leaders have yet to identify clear business applications for GenAI beyond personal productivity enhancements, and budget constraints remain a hurdle for some.


The rise of sustainable data centers: Innovations driving change

Data centers contribute significantly to global carbon emissions, making it essential to adopt measures that reduce their carbon footprint. Carbon usage effectiveness (CUE) is a metric used to assess a data center's carbon emissions relative to its energy consumption. By minimizing CUE, data centers can significantly lower their environmental impact. ... Cooling is one of the largest energy expenses for data centers. Traditional air cooling systems are often inefficient, prompting the need for more advanced solutions. Free cooling, which leverages outside air, is a cost-effective method for data centers in cooler climates. Liquid cooling, on the other hand, uses water or other coolants to transfer heat away from servers more efficiently than air. ... Building and retrofitting data centers sustainably involves adhering to green building certifications like Leadership in Energy and Environmental Design (LEED) and Building Research Establishment Environmental Assessment Method (BREEAM). These certifications ensure that buildings meet high environmental performance standards.


How AIOps Is Poised To Reshape IT Operations

A meaningfully different, as yet underutilized, high-value data set can be derived from the rich, complex interactions of information sources and users on the network, promising to triangulate and correlate with the other data sets available, elevating their combined value to the use case at hand. The challenge in leveraging this source is that the raw traffic data is impossibly massive and too complex for direct ingestion. Further, even compressed into metadata, without transformation, it becomes a disparate stream of rigid, high-cardinality data sets due to its inherent diversity and complexity. A new breed of AIOps solutions is poised to overcome this data deficiency and transform this still raw data stream into refined collections of organized data streams that are augmented and edited through intelligent feature extraction. These solutions use an adaptive AI model and a multi-step transformation sequence to work as an active member of a larger AIOps ecosystem by harmonizing data feeds with the workflows running on the target platform, making it more relevant and less noisy.


Addressing Financial Organizations’ Digital Demands While Avoiding Cyberthreats

The financial industry faces a difficult balancing act, with multiple conflicting priorities at the forefront. Organizations must continually strengthen security around their evolving solutions to keep up in an increasingly competitive and fast-moving landscape. But while strong security is a requirement, it cannot impact usability for customers or employees in an industry where accessibility, agility and the overall user experience are key differentiators. One of the best options to balancing these priorities is the utilization of secure access service edge (SASE) solutions. This model integrates several different security features such as secure web gateway (SWG), zero-trust network access (ZTNA), next-generation firewall (NGFW), cloud access security broker (CASB), data loss prevention (DLP) and network management functions, such as SD-WAN, into a single offering delivered via the cloud. Cloud-based delivery enables financial organizations to easily roll out SASE services and consistent policies to their entire network infrastructure, including thousands of remote workers scattered across various locations, or multiple branch offices to protect private data and users, as well as deployed IoT devices.


Three Signs You Might Need a Data Fabric

One of the most significant challenges organizations face is data silos and fragmentation. As businesses grow and adopt new technologies, they often accumulate disparate data sources across different departments and platforms. These silos make it tougher to have a holistic view of your organization's data, resulting in inefficiencies and missed opportunities. ... You understand that real-time analytics is crucial to your organization’s success. You need to respond quickly to changing market conditions, customer behavior, and operational events. Traditional data integration methods, which often rely on batch processing, can be too slow to meet these demands. You need real-time analytics to:Manage the customer experience. If enhancing a customer’s experience through personalized and timely interactions is a priority, real-time analytics is essential. Operate efficiently. Real-time monitoring and analytics can help optimize operations, reduce downtime, and improve overall efficiency. Handle competitive pressure. Staying ahead of competitors requires quick adaptation to market trends and consumer demands, which is facilitated by real-time insights.


The Tension Between The CDO & The CISO: The Balancing Act Of Data Exploitation Versus Protection

While data delivers a significant competitive advantage to companies when used appropriately, without the right data security measures in place it can be misused. This not only erodes customers’ trust but also puts the company at risk of having to pay penalties and fines for non-compliance with data security regulations. As data teams aim to extract and exploit data for the benefit of the organisation, it is important to note that not all data is equal. As such a risk-based approach must be in place to limit access to sensitive data across the organisation. In doing this the IT system will have access to the full spectrum of data to join and process the information, run through models and identify patterns, but employees rarely need access to all this detail. ... To overcome the conflict of data exploitation versus security and deliver a customer experience that meets customer expectations, data teams and security teams need to work together to achieve a common purpose and align on the culture. To achieve this each team needs to listen to and understand their respective needs and then identify solutions that work towards helping to make the other team successful.


Content Warfare: Combating Generative AI Influence Operations

Moderating such enormous amounts of content by human beings is impossible. That is why tech companies now employ artificial intelligence (AI) to moderate content. However, AI content moderation is not perfect, so tech companies add a layer of human moderation for quality checks to the AI content moderation processes. These human moderators, contracted by tech companies, review user-generated content after it is published on a website or social media platform to ensure it complies with the “community guidelines” of the platform. However, generative AI has forced companies to change their approach to content moderation. ... Countering such content warfare requires collaboration across generative AI companies, social media platforms, academia, trust and safety vendors, and governments. AI developers should build models with detectable and fact-sensitive outputs. Academics should research the mechanisms of foreign and domestic influence operations emanating from the use of generative AI. Governments should impose restrictions on data collection for generative AI, impose controls on AI hardware, and provide whistleblower protection to staff working in the generative AI companies. 


OpenAI reportedly nears breakthrough with “reasoning” AI, reveals progress framework

OpenAI isn't alone in attempting to quantify levels of AI capabilities. As Bloomberg notes, OpenAI's system feels similar to levels of autonomous driving mapped out by automakers. And in November 2023, researchers at Google DeepMind proposed their own five-level framework for assessing AI advancement, showing that other AI labs have also been trying to figure out how to rank things that don't yet exist. OpenAI's classification system also somewhat resembles Anthropic's "AI Safety Levels" (ASLs) first published by the maker of the Claude AI assistant in September 2023. Both systems aim to categorize AI capabilities, though they focus on different aspects. Anthropic's ASLs are more explicitly focused on safety and catastrophic risks (such as ASL-2, which refers to "systems that show early signs of dangerous capabilities"), while OpenAI's levels track general capabilities. However, any AI classification system raises questions about whether it's possible to meaningfully quantify AI progress and what constitutes an advancement. The tech industry so far has a history of overpromising AI capabilities, and linear progression models like OpenAI's potentially risk fueling unrealistic expectations.


White House Calls for Defending Critical Infrastructure

The memo encourages federal agencies "to consult with regulated entities to establish baseline cybersecurity requirements that can be applied across critical infrastructures" while maintaining agility and adaptability to mature with the evolving cyberthreat landscape. ONCD and OMB also urged agencies and federal departments to study open-source software initiatives and the benefits that can be gained by establishing a governance function for open-source projects modeled after the private sector. Budget submissions should identify existing departments and roles designed to investigate, disrupt and dismantle cybercrimes, according to the memo, including interagency task forces focused on combating ransomware infrastructure and the abuse of virtual currency. Meanwhile, the administration is continuing its push for agencies to only use software provided by developers who can attest their compliance with minimum secure software development practices. The national cyber strategy - as well as the joint memo - directs agencies to "utilize grant, loan and other federal government funding mechanisms to ensure minimum security and resilience requirements" are incorporated into critical infrastructure projects.


Unifying Analytics in an Era of Advanced Tech and Fragmented Data Estates

“Data analytics has a last-mile problem,” according to Alex Gnibus, technical product marketing manager, architecture at Alteryx. “In shipping and transportation, you often think of the last-mile problem as that final stage of getting the passenger or the delivery to its final destination. And it’s often the most expensive and time-consuming part.” For data, there is a similar problem; when putting together a data stack, enabling the business at large to derive value from the data is a key enabler—and challenge—of a modern enterprise. Achieving business value from data is the last mile, which is made difficult by complex, numerous technologies that are inaccessible to the final business user. Gnibus explained that Alteryx solves this problem by acting as the “truck” that delivers tangible business value from proprietary data, offering data discovery, use case identification, preparation and analysis, insight-sharing, and AI-powered capabilities. Acting as the easy-to-use interface for a business’ data infrastructure, Alteryx is the AI platform for large-scale enterprise analytics that offers no-code, drag-and-drop functionality that works with your unique data framework configuration as it evolves.



Quote for the day:

“Success is most often achieved by those who don't know that failure is inevitable.” -- Coco Chanel

Daily Tech Digest - May 29, 2024

Algorithmic Thinking for Data Scientists

While data scientists with computer science degrees will be familiar with the core concepts of algorithmic thinking, many increasingly enter the field with other backgrounds, ranging from the natural and social sciences to the arts; this trend is likely to accelerate in the coming years as a result of advances in generative AI and the growing prevalence of data science in school and university curriculums. ... One topic that deserves special attention in the context of algorithmic problem solving is that of complexity. When comparing two different algorithms, it is useful to consider the time and space complexity of each algorithm, i.e., how the time and space taken by each algorithm scales relative to the problem size (or data size). ... Some algorithms may manifest additive or multiplicative combinations of the above complexity levels. E.g., a for loop followed by a binary search entails an additive combination of linear and logarithmic complexities, attributable to sequential execution of the loop and the search routine, respectively.


Job seekers and hiring managers depend on AI — at what cost to truth and fairness?

The darker side to using AI in hiring is that it can bypass potential candidates based on predetermined criteria that don’t necessarily take all of a candidate’s skills into account. And for job seekers, the technology can generate great-looking resumes, but often they’re not completely truthful when it comes to skill sets. ... “AI can sound too generic at times, so this is where putting your eyes on it is helpful,” Toothacre said. She is also concerned about the use of AI to complete assessments. “Skills-based assessments are in place to ensure you are qualified and check your knowledge. Using AI to help you pass those assessments is lying about your experience and highly unethical.” There’s plenty of evidence that genAI can improve resume quality, increase visibility in online job searches, and provide personalized feedback on cover letters and resumes. However, concerns about overreliance on AI tools, lack of human touch in resumes, and the risk of losing individuality and authenticity in applications are universal issues that candidates need to be mindful of regardless of their geographical location, according to Helios’ Hammell.


Comparing smart contracts across different blockchains from Ethereum to Solana

Polkadot is designed to enable interoperability among various blockchains through its unique architecture. The network’s core comprises the relay chain and parachains, each playing a distinct role in maintaining the system’s functionality and scalability. ... Developing smart contracts on Cardano requires familiarity with Haskell for Plutus and an understanding of Marlowe for financial contracts. Educational resources like the IOG Academy provide learning paths for developers and financial professionals. Tools like the Marlowe Playground and the Plutus development environment aid in simulating and testing contracts before deployment, ensuring they function as intended. ... Solana’s smart contracts are stateless, meaning the contract logic is separated from the state, which is stored in external accounts. This separation enhances security and scalability by isolating the contract code from the data it interacts with. Solana’s account model allows for program reusability, enabling developers to create new tokens or applications by interacting with existing programs, reducing the need to redeploy smart contracts, and lowering costs.


3 things CIOs can do to make gen AI synch with sustainability

“If you’re only buying inference services, ask them how they can account for all the upstream impact,” says Tate Cantrell, CTO of Verne, a UK-headquartered company that provides data center solutions for enterprises and hyperscalers. “Inference output takes a split second. But the only reason those weights inside that neural network are the way they are is because of massive amounts of training — potentially one or two months of training at something like 100 to 400 megawatts — to get that infrastructure the way it is. So how much of that should you be charged for?” Cantrell urges CIOs to ask providers about their own reporting. “Are they doing open reporting about the full upstream impact that their services have from a sustainability perspective? How long is the training process, how long is it valid for, and how many customers did that weight impact?” According to Sundberg, an ideal solution would be to have the AI model tell you about its carbon footprint. “You should be able to ask Copilot or ChatGPT what the carbon footprint of your last query is,” he says. 


EU’s ChatGPT taskforce offers first look at detangling the AI chatbot’s privacy compliance

The taskforce’s report discusses this knotty lawfulness issue, pointing out ChatGPT needs a valid legal basis for all stages of personal data processing — including collection of training data; pre-processing of the data (such as filtering); training itself; prompts and ChatGPT outputs; and any training on ChatGPT prompts. The first three of the listed stages carry what the taskforce couches as “peculiar risks” for people’s fundamental rights — with the report highlighting how the scale and automation of web scraping can lead to large volumes of personal data being ingested, covering many aspects of people’s lives. It also notes scraped data may include the most sensitive types of personal data (which the GDPR refers to as “special category data”), such as health info, sexuality, political views etc, which requires an even higher legal bar for processing than general personal data. On special category data, the taskforce also asserts that just because it’s public does not mean it can be considered to have been made “manifestly” public — which would trigger an exemption from the GDPR requirement for explicit consent to process this type of data.


Avoiding the cybersecurity blame game

Genuine negligence or deliberate actions should be handled appropriately, but apportioning blame and meting out punishment must be the final step in an objective, reasonable investigation. It should certainly not be the default reaction. So far, so reasonable, yes? But things are a little more complicated than this. It’s all very well saying, “don’t blame the individual, blame the company”. Effectively, no “company” does anything; only people do. The controls, processes and procedures that let you down were created by people – just different people. If we blame the designers of controls, processes and procedures… well, we are just shifting blame, which is still counterproductive. ... Managers should use the additional resources to figure out how to genuinely change the work environment in which employees operate and make it easier for them to do their job in a secure practical manner. Managers should implement a circular, collaborative approach to creating a frictionless, safer environment, working positively and without blame.


The decline of the user interface

The Ok and Cancel buttons played important roles. A user might go to a Settings dialog, change a bunch of settings, and then click Ok, knowing that their changes would be applied. But often, they would make some changes and then think “You know, nope, I just want things back like they were.” They’d hit the Cancel button, and everything would reset to where they started. Disaster averted. Sadly, this very clear and easy way of doing things somehow got lost in the transition to the web. On the web, you will often see Settings pages without Ok and Cancel buttons. Instead, you’re expected to click an X in the upper right to make the dialog close, accepting any changes that you’ve made. ... In the newer versions of Windows, I spend a dismayingly large amount of time trying to get the mouse to the right spot in the corner or edge of an application so that I can size it. If I want to move a window, it is all too frequently difficult to find a location at the top of the application to click on that will result in the window being relocated. Applications used to have a very clear title bar that was easy to see and click on.


Lawmakers paint grim picture of US data privacy in defending APRA

At the center of the debate is the American Privacy Rights Act (APRA), the push for a federal data privacy law that would either simplify a patchwork of individual state laws – or run roughshod over existing privacy legislation, depending on which state is offering an opinion. While harmonizing divergent laws seems wise as a general measure, states like California, where data privacy laws are already much stricter than in most places, worry about its preemptive clauses weakening their hard-fought privacy protections. Rodgers says APRA is “an opportunity for a reset, one that can help return us to the American Dream our Founders envisioned. It gives people the right to control their personal information online, something the American people overwhelmingly want,” she says. “They’re tired of having their personal information abused for profit.” From loose permissions on sharing location data to exposed search histories, there are far too many holes in Americans’ digital privacy for Rodgers’ liking. Pointing to the especially sensitive matter of childrens’ data, she says that “as our kids scroll, companies collect nearly every data point imaginable to build profiles on them and keep them addicted. ...”


Picking an iPaaS in the Age of Application Overload

Companies face issues using proprietary integration solutions, as they end up with black-box solutions with limited flexibility. For example, the inability to natively embed outdated technology into modern stacks, such as cloud native supply chains with CI/CD pipelines, can slow down innovation and complicate the overall software delivery process. Companies should favor iPaaS technologies grounded in open source and open standards. Can you deploy it to your container orchestration cluster? Can you plug it into your existing GitOps procedures? Such solutions not only ensure better integration into proven QA-tested procedures but also offer greater freedom to migrate, adapt and debug as needs evolve. ... As organizations scale, so too must their integration solutions. Companies should avoid iPaaS solutions offering only superficial “cloud-washed” capabilities. They should prioritize cloud native solutions designed from the ground up for the cloud, and that leverage container orchestration tools like Kubernetes and Docker Swarm, which are essential for ensuring scalability and resilience.
Shifting left is a cultural and practice shift, but it also includes technical changes to how a shared testing environment is set up. ... The approach scales effectively across engineering teams, as each team or developer can work independently on their respective services or features, thereby reducing dependencies. While this is great advice, it can feel hard to implement in the current development environment: If the process of releasing code to a shared testing cluster takes too much time, it doesn’t seem feasible to test small incremental changes. ... The difference between finding bugs as a user and finding them as a developer is massive: When an operations or site reliability engineer (SRE) finds a problem, they need to find the engineer who released the code, describe the problem they’re seeing, and present some steps to replicate the issue. If, instead, the original developer finds the problem, they can cut out all those steps by looking at the output, finding the cause, and starting on a fix. This proactive approach to quality reduces the number of bugs that need to be filed and addressed later in the development cycle.



Quote for the day:

"The best and most beautiful things in the world cannot be seen or even touched- they must be felt with the heart." -- Helen Keller