Showing posts with label banking. Show all posts
Showing posts with label banking. Show all posts

Daily Tech Digest - September 08, 2025


Quote for the day:

"Let no feeling of discouragement prey upon you, and in the end you are sure to succeed." -- Abraham Lincoln


Coding With AI Assistants: Faster Performance, Bigger Flaws

One challenge comes in the form of how AI coding assistants tend to package their code. Rather than delivering bite-size pieces, they generally deliver larger code pull requests for porting into the main project repository. Apiiro saw AI code assistants deliver three to four times as many code commits - meaning changes to a code repository - than non-AI code assistants, but packaging fewer pull requests. The problem is that larger PRs are inherently riskier and more time-consuming to verify. "Bigger, multi-touch PRs slow review, dilute reviewer attention and raise the odds that a subtle break slips through," said Itay Nussbaum, a product manager at Apiiro. ... At the same time, the tools generated deeper problems, in the form of a 150% increase in architectural flaws and an 300% increase in privilege issues. "These are the kinds of issues scanners miss and reviewers struggle to spot - broken auth flows, insecure designs, systemic weaknesses," Nussbaum said. "In other words, AI is fixing the typos but creating the time bombs." The tools also have a greater tendency to leak cloud credentials. "Our analysis found that AI-assisted developers exposed Azure service principals and storage access keys nearly twice as often as their non-AI peers," Nussbaum said. "Unlike a bug that can be caught in testing, a leaked key is live access: an immediate path into the production cloud infrastructure."


IT Leadership Is More Change Management Than Technical Management

Planning is considered critical in business to keep an organization moving forward in a predictable way, but Mahon doesn’t believe in the traditional annual and long-term planning in which lots of time is invested in creating the perfect plan which is then executed. “Never get too engaged in planning. You have a plan, but it’s pretty broad and open-ended. The North Star is very fuzzy, and it never gets to be a pinpoint [because] you need to focus on all the stuff that's going on around you,” says Mahon. “You should know exactly what you're going to do in the next two to three months. From three to six months out, you have a really good idea what you're going to do but be prepared to change. And from six to nine months or a year, [I wait until] we get three months away before I focus on it because tech and business needs change rapidly.” ... “The good ideas are mostly common knowledge. To be honest, I don’t think there are any good self-help books. Instead, I have a leadership coach who is also my mental health coach,” says Mahon. “Books try to get you to change who you are, and it doesn’t work. Be yourself. I have a leadership coach who points out my flaws, 90% of which I’m already aware of. His philosophy is don’t try to fix the flaw, address the flaw so, for example, I’m mindful about my tendency to speak too directly.”


The Anatomy of SCREAM: A Perfect Storm in EA Cupboard

SCREAM- Situational Chaotic Realities of Enterprise Architecture Management- captures the current state of EA practice, where most organizations, from medium to large complexity, struggle to derive optimal value from investments in enterprise architecture capabilities. It’s the persistent legacy challenges across technology stacks and ecosystems that need to be solved to meet strategic business goals and those moments when sudden, ill-defined executive needs are met with a hasty, reactive sprint, leading to a fractured and ultimately paralyzing effect on the entire organization. ... The paradox is that the very technologies offering solutions to business challenges are also key sources of architectural chaos, further entrenching reactive SCREAM. As noted, the inevitable chaos and fragmentation that emerge from continuous technology additions lead to silos and escalating compatibility issues. ... The chaos of SCREAM is not just an external force; it’s a product of our own making. While we preach alignment to the business, we often get caught up in our own storm in an EA cupboard. How often do we play EA on EA? ... While pockets of recognizable EA wins may exist through effective engagement, a true, repeatable value-add requires a seat at the strategic table. This means “architecture-first” must evolve beyond being a mere buzzword or a token effort, becoming a reliable approach that promotes collaborative success rather than individual credit-grabbing.


How Does Network Security Handle AI?

Detecting when AI models begin to vary and yield unusual results is the province of AI specialists, users and possibly the IT applications staff. But the network group still has a role in uncovering unexpected behavior. That role includes: Properly securing all AI models and data repositories on the network. Continuously monitoring all access points to the data and the AI system. Regularly scanning for network viruses and any other cyber invaders that might be lurking. ... both application and network teams need to ensure strict QA principles across the entire project -- much like network vulnerability testing. Develop as many adversarial prompt tests coming from as many different directions and perspectives as you can. Then try to break the AI system in the same way a perpetrator would. Patch up any holes you find in the process. ... Apply least privilege access to any AI resource on the network and continually monitor network traffic. This philosophy should also apply to those on the AI application side. Constrict the AI model being used to the specific use cases for which it was intended. In this way, the AI resource rejects any prompts not directly related to its purpose. ... Red teaming is ethical hacking. In other words, deploy a team whose goal is to probe and exploit the network in any way it can. The aim is to uncover any network or AI vulnerability before a bad actor does the same.


Lack of board access: The No. 1 factor for CISO dissatisfaction

CISOs who don’t get access to the board are often buried within their organizations. “There are a lot of companies that will hire at a director level or even a senior manager level and call it a CISO. But they don’t have the authority and scope to actually be able to execute what a CISO does,” says Nick Kathmann, CISO at LogicGate. Instead of reporting directly to the board or CEO, these CISOs will report to a CIO, CTO or other executive, despite the problems that can arise in this type of reporting structure. CIOs and CTOs are often tasked with implementing new technology. The CISO’s job is to identity risks and ensure the organization is secure. “If the CIO doesn’t like those risks or doesn’t want to do anything to fix those risks, they’ll essentially suppress them [CISOs] as much as they can,” says Kathmann. ... Getting in front of the board is one thing. Effectively communicating cybersecurity needs and getting them met is another. It starts with forming relationships with C-suite peers. Whether CISOs are still reporting up to another executive or not, they need to understand their peers’ priorities and how cybersecurity can mesh with those. “The CISO job is an executive job. As an executive, you rely completely on your peer relationships. You can’t do anything as an executive in a vacuum,” says Barrack. Working in collaboration, rather than contention, with other executives can prepare CISOs to make the most of their time in front of the board.


From Vault Sprawl to Governance: How Modern DevOps Teams Can Solve the Multi-Cloud Secrets Management Nightmare

Every time an application is updated or a new service is deployed, one or multiple new identities are born. These NHIs include service accounts, CI/CD pipelines, containers, and other machine workloads, the running pieces of software that connect to other resources and systems to do work. Enterprises now commonly see 100 or more NHIs for every single human identity. And that number keeps growing. ... Fixing this problem is possible, but it requires an intentional strategy. The first step is creating a centralized inventory of all secrets. This includes secrets stored in vaults, embedded in code, or left exposed in CI/CD pipelines and environments. Orphaned and outdated secrets should be identified and removed. Next, organizations must shift left. Developers and DevOps teams require tools to detect secrets early, before they are committed to source control or merged into production. Educating teams and embedding detection into the development process significantly reduces accidental leaks. Governance must also include lifecycle mapping. Secrets should be enriched with metadata such as owner, creation date, usage frequency, and last rotation. Automated expiration and renewal policies help enforce consistency and reduce long-term risk. Contributions should be both product- and vendor-agnostic, focusing on market insights and thought leadership.


Digital Public Infrastructure: The backbone of rural financial inclusion

When combined, these infrastructures — UPI for payments, ONDC for commerce, AAs for credit, CSCs for handholding support and broadband for connectivity form a powerful ecosystem. Together, these enable a farmer to sell beyond the village, receive instant payment and leverage that income proof for a micro-loan, all within a seamless digital journey. Adding to this, e-KYC ensures that identity verification is quick, low-cost and paperless, while AePS provides last-mile access to cash and banking services, ensuring inclusion even for those outside the smartphone ecosystem. This integration reduces dependence on middlemen, enhances transparency and fosters entrepreneurship. ...  Of course, progress does not mean perfection. There are challenges that must be addressed with urgency and sensitivity. Many rural merchants hesitate to fully embrace digital commerce due to uncertainties around Goods and Services Tax (GST) compliance. Digital literacy, though improving, still varies widely, particularly among older populations and women. Infrastructure costs such as last-mile broadband and device affordability remain burdensome for small operators. These are not reasons to slow down but opportunities to fine-tune policy. Simplifying tax processes for micro-enterprises, investing in vernacular digital literacy programmes, subsidising rural connectivity and embedding financial education into community touchpoints such as CSCs will be essential to ensure no one is left behind.


Cybersecurity research is getting new ethics rules, here’s what you need to know

Ethics analysis should not be treated as a one-time checklist. Stakeholder concerns can shift as a project develops, and researchers may need to revisit their analysis as they move from design to execution to publication. ...“Stakeholder ethical concerns impact academia, industry, and government,” Kalu said. “Security teams should replace reflexive defensiveness with structured collaboration: recognize good-faith research, provide intake channels and SLAs, support coordinated disclosure and pre-publication briefings, and engage on mitigation timelines. A balanced, invitational posture, rather than an adversarial one, will reduce harm, speed remediation, and encourage researchers to keep working on that project.” ... While the new requirements target academic publishing, the ideas extend to industry practice. Security teams often face similar dilemmas when deciding whether to disclose vulnerabilities, release tools, or adopt new defensive methods. Thinking in terms of stakeholders provides a way to weigh the benefits and risks of those decisions. ... Peng said ethical standards should be understood as “scaffolds that empower thoughtful research,” providing clarity and consistency without blocking exploration of adversarial scenarios. “By building ethics into the process from the start and revisiting it as research develops, we can both protect stakeholders and ensure researchers can study the potential threats that adversaries, who face no such constraints, may exploit,” she said.


From KYC to KYAI: Why ‘Algorithmic Transparency’ is Now Critical in Banking

This growing push for transparency into AI models has introduced a new acronym to the risk and compliance vernacular: KYAI, or "know your AI." Just like finance institutions must know the important details about their customers, so too must they understand the essential components of their AI models. The imperative has evolved beyond simply knowing "who" to "how." Based on my work helping large banks and other financial institutions integrate AI into their KYC workflows over the last few years, I’ve seen what can happen when these teams spend the time vetting their AI models and applying rigorous transparency standards. And, I’ve seen what can happen when they become overly trusting of black-box algorithms that deliver decisions based on opaque methods with no ability to attribute accountability. The latter rarely ever ends up being the cheapest or fastest way to produce meaningful results. ... The evolution from KYC to KYAI is not merely driven by regulatory pressure; it reflects a fundamental shift in how businesses operate today. Financial institutions that invest in AI transparency will be equipped to build greater trust, reduce operational risks, and maintain auditability without missing a step in innovation. The transformation from black box AI to transparent, governable systems represents one of the most significant operational challenges facing financial institutions today.


Why compliance clouds are essential

From a technical perspective, compliance clouds offer something that traditional clouds can’t match, these are the battle-tested security architectures. By implementing them, the organizations can reduce their data breach risk by 30-40% compared to standard cloud deployments. This is because compliance clouds are constantly reviewed and monitored by third-party experts, ensuring that we are not just getting compliance, but getting an enterprise-grade security that’s been validated by some of the most security-conscious organizations in the world. ... What’s particularly interesting is that 58% of this market is software focused. As organizations prioritize automation and efficiency in managing complex regulatory requirements, this number is set to grow further. Over 75% of federal agencies have already shifted to cloud-based software to meet evolving compliance needs. Following this, we at our organizations have also achieved FedRAMP® High Ready compliance for Cloud. ... Cloud compliance solutions deliver far-reaching benefits that extend well beyond regulatory adherence, offering a powerful mix of cost efficiency, trust building, adaptability, and innovation enablement. ... In an era where trust is a competitive currency, compliance cloud certifications serve as strong differentiators, signaling an organization’s unwavering commitment to data protection and regulatory excellence.

Daily Tech Digest - August 23, 2025


Quote for the day:

"Failure is the condiment that gives success its flavor." -- Truman Capote


Enterprise passwords becoming even easier to steal and abuse

Attackers actively target user credentials because they offer the most direct route or foothold into a targeted organization’s network. Once inside, attackers can move laterally across systems, searching for other user accounts to compromise, or they attempt to escalate their privileges and gain administrative control. This hunt for credentials extends beyond user accounts to include code repositories, where developers may have hard-coded access keys and other secrets into application source code. Attacks using valid credentials were successful 98% of the time, according to Picus Security. ... “CISOs and security teams should focus on enforcing strong, unique passwords, using MFA everywhere, managing privileged accounts rigorously and testing identity controls regularly,” Curran says. “Combined with well-tuned DLP and continuous monitoring that can detect abnormal patterns quickly, these measures can help limit the impact of stolen or cracked credentials.” Picus Security’s latest findings reveal a concerning gap between the perceived protection of security tools and their actual performance. An overall protection effectiveness score of 62% contrasts with a shockingly low 3% prevention rate for data exfiltration. “Failures in detection rule configuration, logging gaps and system integration continue to undermine visibility across security operations,” according to Picus Security.


Architecting the next decade: Enterprise architecture as a strategic force

In an age of escalating cyber threats and expanding digital footprints, security can no longer be layered on; it must be architected in from the start. With the rise of AI, IoT and even quantum computing on the horizon, the threat landscape is more dynamic than ever. Security-embedded architectures prioritize identity-first access control, continuous monitoring and zero-trust principles as baseline capabilities. ... Sustainability is no longer a side initiative; it’s becoming a first principle of enterprise architecture. As organizations face pressure from regulators, investors and customers to lower their carbon footprint, digital sustainability is gaining traction as a measurable design objective. From energy-efficient data centers to cloud optimization strategies and greener software development practices, architects are now responsible for minimizing the environmental impact of IT systems. The Green Software Foundation has emerged as a key ecosystem partner, offering measurement standards like software carbon intensity (SCI) and tooling for emissions-aware development pipelines. ... Technology leaders must now foster a culture of innovation, build interdisciplinary partnerships and enable experimentation while ensuring alignment with long-term architectural principles. They must guide the enterprise through both transformation and stability, navigating short-term pressures and long-term horizons simultaneously.


Capitalizing on Digital: Four Strategic Imperatives for Banks and Credit Unions

Modern architectures dissolve the boundary between core and digital. The digital banking solution is no longer a bolt-on to the core; the core and digital come together to form the accountholder experience. That user experience is delivered through the digital channel, but when done correctly, it’s enabled by the modern core. Among other things, the core transformation requires robust use of shared APIs, consistent data structures, and unified development teams. Leading financial institutions are coming to realize that core evaluations now must include an evaluable of their capability to enable the digital experience. Criteria like Availability, Reliability, Real-time, Speed and Security are now emerging as foundational requirements of a core to enable the digital experience. "If your core can’t keep up with your digital, you’re stuck playing catch-up forever," said Jack Henry’s Paul Wiggins, Director of Sales, Digital Engineers. ... Many institutions still operate with digital siloed in one department, while marketing, product, and operations pursue separate agendas. This leads to mismatched priorities — products that aren’t promoted effectively, campaigns that promise features operations can’t support, and technical fixes that don’t address the root cause of customer and member pain points. ... Small-business services are a case in point. Jack Henry’s Strategy Benchmark study found that 80% of CEOs plan to expand these services over the next two years. 


Bentley Systems CIO Talks Leadership Strategy and AI Adoption

The thing that’s really important for a CIO to be thinking about is that we are a microcosm for how all of the business functions are trying to execute the tactics against the strategy. What we can do across the portfolio is represent the strategy in real terms back to the business. We can say: These are all of the different places where we're thinking about investing. Does that match with the strategy we thought we were setting for ourselves? And where is there a delta and a difference? ... When I got my first CIO role, there was all of this conversation about business process. That was the part that I had to learn and figure out how to map into these broader, strategic conversations. I had my first internal IT role at Deutsche Bank, where we really talked about product model a lot -- thinking about our internal IT deliverables as products. When I moved to Lenovo, we had very rich business process and transformation conversations because we were taking the whole business through such a foundational change. I was able to put those two things together. It was a marriage of several things: running a product organization; marrying that to the classic IT way of thinking about business process; and then determining how that becomes representative to the business strategy.


What Is Active Metadata and Why Does It Matter?

Active metadata addresses the shortcomings of passive approaches by automatically updating the metadata whenever an important aspect of the information changes. Defining active metadata and understanding why it matters begins by looking at the shift in organizations’ data strategies from a focus on data acquisition to data consumption. The goal of active metadata is to promote the discoverability of information resources as they are acquired, adapted, and applied over time. ... From a data consumer’s perspective, active metadata adds depth and breadth to their perception of the data that fuels their decision-making. By highlighting connections between data elements that would otherwise be hidden, active metadata promotes logical reasoning about data assets. This is especially so when working on complex problems that involve a large number of disconnected business and technical entities.The active metadata analytics workflow orchestrates metadata management across platforms to enhance application integration, resource management, and quality monitoring. It provides a single, comprehensive snapshot of the current status of all data assets involved in business decision-making. The technology augments metadata with information gleaned from business processes and information systems. 


Godrej Enterprises CHRO on redefining digital readiness as culture, not tech

“Digital readiness at Godrej Enterprises Group is about empowering every employee to thrive in an ever-evolving landscape,” Kaur said. “It’s not just about technology adoption. It’s about building a workforce that is agile, continuously learning, and empowered to innovate.” This reframing reflects a broader trend across Indian industry, where digital transformation is no longer confined to IT departments but runs through every layer of an organisation. For Godrej Enterprises Group, this means designing a workplace where intrapreneurship is rewarded, innovation is constant, and employees are trained to think beyond immediate functions. ... “We’ve moved away from one-off training sessions to creating a dynamic ecosystem where learning is accessible, relevant, and continuous,” she said. “Learning is no longer a checkbox — it’s a shared value that energises our people every day.” This shift is underpinned by leadership development programmes and innovation platforms, ensuring that employees at every level are encouraged to experiment and share knowledge.  ... “We see digital skilling as a core business priority, not just an HR or L&D initiative,” she said. “By making digital skilling a shared responsibility, we foster a culture where learning is continuous, progress is visible, and success is celebrated across the organisation.”


AI is creeping into the Linux kernel - and official policy is needed ASAP

However, before you get too excited, he warned: "This is a great example of what LLMs are doing right now. You give it a small, well-defined task, and it goes and does it. And you notice that this patch isn't, 'Hey, LLM, go write me a driver for my new hardware.' Instead, it's very specific -- convert this specific hash to use our standard API." Levin said another AI win is that "for those of us who are not native English speakers, it also helps with writing a good commit message. It is a common issue in the kernel world where sometimes writing the commit message can be more difficult than actually writing the code change, and it definitely helps there with language barriers." ... Looking ahead, Levin suggested LLMs could be trained to become good Linux maintainer helpers: "We can teach AI about kernel-specific patterns. We show examples from our codebase of how things are done. It also means that by grounding it into our kernel code base, we can make AI explain every decision, and we can trace it to historical examples." In addition, he said the LLMs can be connected directly to the Linux kernel Git tree, so "AI can go ahead and try and learn things about the Git repo all on its own." ... This AI-enabled program automatically analyzes Linux kernel commits to determine whether they should be backported to stable kernel trees. The tool examines commit messages, code changes, and historical backporting patterns to make intelligent recommendations.


Applications and Architecture – When It’s Not Broken, Should You Try to Fix It?

No matter how reliable your application components are, they will need to be maintained, upgraded or replaced at some point. As elements in your application evolve, some will reach end of life status – for example, Redis 7.2 will reach end of life status for security updates in February 2026. Before that point, it’s necessary to assess the available options. For businesses in some sectors like financial services, running out of date and unsupported software is a potential failure for regulations on security and resilience. For example, the Payment Card Industry Data Security Standard version 4.0 enforces that teams should check all their software and hardware is supported every year; in the case of end of life software, teams must also provide a full plan for migration that will be completed within twelve months. ... For developers and software architects, understanding the role that any component plays in the overall application makes it easier to plan ahead. Even the most reliable and consistent component may need to change given outside circumstances. In the Discworld series, golems are so reliable that they become the standard for currency; at the same time, there are so many of them that any problem could affect the whole economy. When it comes to data caching, Redis has been a reliable companion for many developers. 


From cloud migration to cloud optimization

The report, based on insights from more than 2,000 IT leaders, reveals that a staggering 94% of global IT leaders struggle with cloud cost optimization. Many enterprises underestimate the complexities of managing public cloud resources and the inadvertent overspending that occurs from mismanagement, overprovisioning, or a lack of visibility into resource usage. This inefficiency goes beyond just missteps in cloud adoption. It also highlights how difficult it is to align IT cost optimization with broader business objectives. ... This growing focus sheds light on the rising importance of finops (financial operations), a practice aimed at bringing greater financial accountability to cloud spending. Adding to this complexity is the increasing adoption of artificial intelligence and automation tools. These technologies drive innovation, but they come with significant associated costs. ... The argument for greater control is not new, but it has gained renewed relevance when paired with cost optimization strategies. ... With 41% of respondents’ IT budgets still being directed to scaling cloud capabilities, it’s clear that the public cloud will remain a cornerstone of enterprise IT in the foreseeable future. Cloud services such as AI-powered automation remain integral to transformative business strategies, and public cloud infrastructure is still the preferred environment for dynamic, highly scalable workloads. Enterprises will need to make cloud deployments truly cost-effective.


The Missing Layer in AI Infrastructure: Aggregating Agentic Traffic

Software architects and engineering leaders building AI-native platforms are starting to notice familiar warning signs: sudden cost spikes on AI API bills, bots with overbroad permissions tapping into sensitive data, and a disconcerting lack of visibility or control over what these AI agents are doing. It’s a scenario reminiscent of the early days of microservices – before we had gateways and meshes to restore order – only now the "microservices" are semi-autonomous AI routines. Gartner has begun shining a spotlight on this emerging gap. ... Every major shift in software architecture eventually demands a mediation layer to restore control. When web APIs took off, API gateways became essential for managing authentication/authorization, rate limits, and policies. With microservices, service meshes emerged to govern internal traffic. Each time, the need only became clear once the pain of scale surfaced. Agentic AI is on the same path. Teams are wiring up bots and assistants that call APIs independently - great for demos ... So, what exactly is an AI Gateway? At its core, it’s a middleware component – either a proxy, service, or library – through which all AI agent requests to external services are channeled. Rather than letting each agent independently hit whatever API it wants, you route those calls via the gateway, which can then enforce policies and provide central management.



Daily Tech Digest - August 14, 2025


Quote for the day:

"Act as if what you do makes a difference. It does." -- William James


What happens the day after superintelligence?

As context, artificial superintelligence (ASI) refers to systems that can outthink humans on most fronts, from planning and reasoning to problem-solving, strategic thinking and raw creativity. These systems will solve complex problems in a fraction of a second that might take the smartest human experts days, weeks or even years to work through. ... So ask yourself, honestly, how will humans act in this new reality? Will we reflexively seek advice from our AI assistants as we navigate every little challenge we encounter? Or worse, will we learn to trust our AI assistants more than our own thoughts and instincts? ... Imagine walking down the street in your town. You see a coworker heading towards you. You can’t remember his name, but your AI assistant does. It detects your hesitation and whispers the coworker’s name into your ears. The AI also recommends that you ask the coworker about his wife, who had surgery a few weeks ago. The coworker appreciates the sentiment, then asks you about your recent promotion, likely at the advice of his own AI. Is this human empowerment, or a loss of human agency? ... Many experts believe that body-worn AI assistants will make us feel more powerful and capable, but that’s not the only way this could go. These same technologies could make us feel less confident in ourselves and less impactful in our lives.


Confidential Computing: A Solution to the Uncertainty of Using the Public Cloud

Confidential computing is a way to ensure that no external party can look at your data and business logic while it is executed. It looks to secure Data in Use. When you now add to that the already established way to secure Data at Rest and Data in Transit it can be ensured that most likely no external party can access secured data running in a confidential computing environment wherever that may be. ... To be able to execute services in the cloud the company needs to be sure that the data and the business logic cannot be accessed or changed from third parties especially by the system administrator of that cloud provider. It needs to be protected. Or better, it needs to be executed in the Trusted Compute Base (TCB) of the company. This is the environment where specific security standards are set to restrict all possible access to data and business logic. ... Here attestation is used to verify that a confidential environment (instance) is securely running in the public cloud and it can be trusted to implement all the security standards necessary. Only after successful attestation the TCB is then extended into the Public cloud to incorporate the attested instances. One basic requirement of attestation is that the attestation service is located independently of the infrastructure where the instance is running. 


Open Banking's Next Phase: AI, Inclusion and Collaboration

Think of open banking as the backbone for secure, event-driven automation: a bill gets paid, and a savings allocation triggers instantly across multiple platforms. The future lies in secure, permissioned coordination across data silos, and when applied to finance, it unlocks new, high-margin services grounded in trust, automation and personalisation. ... By building modular systems that handle hierarchy, fee setup, reconciliation and compliance – all in one cohesive platform – we can unlock new revenue opportunities. ... Regulators must ensure they are stepping up efforts to sustain progress and support fintech innovation whilst also meeting their aim to keep customers safe. Work must also be done to boost public awareness of the value of open banking. Many consumers are unaware of the financial opportunities open banking offers and some remain wary of sharing their data with unknown third parties. ... Rather than duplicating efforts or competing head-to-head, institutions and fintechs should focus on co-developing shared infrastructure. When core functions like fee management, operational controls and compliance processes are unified in a central platform, fintechs can innovate on customer experience, while banks provide the stability, trust and reach. 


Data centers are eating the economy — and we’re not even using them

Building new data centers is the easy solution, but it’s neither sustainable nor efficient. As I’ve witnessed firsthand in developing compute orchestration platforms, the real problem isn’t capacity. It’s allocation and optimization. There’s already an abundant supply sitting idle across thousands of data centers worldwide. The challenge lies in efficiently connecting this scattered, underutilized capacity with demand. ... The solution isn’t more centralized infrastructure. It’s smarter orchestration of existing resources. Modern software can aggregate idle compute from data centers, enterprise servers, and even consumer devices into unified, on-demand compute pools. ... The technology to orchestrate distributed compute already exists. Some network models already demonstrate how software can abstract away the complexity of managing resources across multiple providers and locations. Docker containers and modern orchestration tools make workload portability seamless. The missing piece is just the industry’s willingness to embrace a fundamentally different approach. Companies need to recognize that most servers are idle 70%-85% of the time. It’s not a hardware problem requiring more infrastructure. 


How an AI-Based 'Pen Tester' Became a Top Bug Hunter on HackerOne

While GenAI tools can be extremely effective at finding potential vulnerabilities, XBOW's team found they were't very good at validating the findings. The trick to making a successful AI-driven pen tester, Dolan-Gavitt explained, was to use something other than an LLM to verify the vulnerabilities. In this case of XBOW, researchers used a deterministic validation approach. "Potentially, maybe in a couple years down the road, we'll be able to actually use large language models out of the box to verify vulnerabilities," he said. "But for today, and for the rest of this talk, I want to propose and argue for a different way, which is essentially non-AI, deterministic code to validate vulnerabilities." But AI still plays an integral role with XBOW's pen tester. Dolan-Gavitt said the technology uses a capture-the-flag (CTF) approach in which "canaries" are placed in the source code and XBOW sends AI agents after them to see if they can access them. For example, he said, if researchers want to find a remote code execution (RCE) flaw or an arbitrary file read vulnerability, they can plant canaries on the server's file system and set the agents loose. ... Dolan-Gavitt cautioned that AI-powered pen testers are not panacea. XBOW still sees some false positives because some vulnerabilities, like business logic flaws, are difficult to validate automatically.


Data Governance Maturity Models and Assessments: 2025 Guide

Data governance maturity frameworks help organizations assess their data governance capabilities and guide their evolution toward optimal data management. To implement a data governance or data management maturity framework (a “model”) it is important to learn what data governance maturity is, explore how and why it should be assessed, discover various maturity models and their features, and understand the common challenges associated with using maturity models. Data governance maturity refers to the level of sophistication and effectiveness with which an organization manages its data governance processes. It encompasses the extent to which an organization has implemented, institutionalized, and optimized its data governance practices. A mature data governance framework ensures that the organization can support its business objectives with accurate, trusted, and accessible data. Maturity in data governance is typically assessed through various models that measure different aspects of data management such as data quality and compliance and examine processes for managing data’s context (metadata) and its security. Maturity models provide a structured way to evaluate where an organization stands and how it can improve for a given function.


Open-source flow monitoring with SENSOR: Benefits and trade-offs

Most flow monitoring setups rely on embedded flow meters that are locked to a vendor and require powerful, expensive devices. SENSOR shows it’s possible to build a flexible and scalable alternative using only open tools and commodity hardware. It also allows operators to monitor internal traffic more comprehensively, not just what crosses the network border. ... For a large network, that can make troubleshooting and oversight more complex. “Something like this is fine for small networks,” David explains, “but it certainly complicates troubleshooting and oversight on larger networks.” David also sees potential for SENSOR to expand beyond historical analysis by adding real-time alerting. “The paper doesn’t describe whether the flow collectors can trigger alarms for anomalies like rapidly spiking UDP traffic, which could indicate a DDoS attack in progress. Adding real-time triggers like this would be a valuable enhancement that makes SENSOR more operationally useful for network teams.” ... “Finally, the approach is fragile. It relies on precise bridge and firewall configurations to push traffic through the RouterOS stack, which makes it sensitive to updates, misconfigurations, or hardware changes. 


Network Segmentation Strategies for Hybrid Environments

It's not a simple feat to implement network segmentation. Network managers must address network architectural issues, obtain tools and methodologies, review and enact security policies, practices and protocols, and -- in many cases -- overcome political obstacles. ... The goal of network segmentation is to place the most mission-critical and sensitive resources and systems under comprehensive security for a finite ecosystem of users. From a business standpoint, it's equally critical to understand the business value of each network asset and to gain support from users and management before segmenting. ... Divide the network segments logically into security segments based on workload, whether on premises, cloud-based or within an extranet. For example, if the Engineering department requires secure access to its product configuration system, only that team would have access to the network segment that contains the Engineering product configuration system. ... A third prong of segmented network security enforcement in hybrid environments is user identity management. Identity and access management (IAM) technology identifies and tracks users at a granular level based on their authorization credentials in on-premises networks but not on the cloud. 


Convergence of AI and cybersecurity has truly transformed the CISO’s role

The most significant impact of AI in security at present is in automation and predictive analysis. Automation especially when enhanced with AI, such as integrating models like Copilot Security with tools like Microsoft Sentinel allows organisations to monitor thousands of indicators of compromise in milliseconds and receive instant assessments. ... The convergence of AI and cybersecurity has truly transformed the CISO’s role, especially post-pandemic when user locations and systems have become unpredictable. Traditionally, CISOs operated primarily as reactive defenders responding to alerts and attacks as they arose. Now, with AI-driven predictive analysis, we’re moving into a much more proactive space. CISOs are becoming strategic risk managers, able to anticipate threats and respond with advanced tools. ... Achieving real-time threat detection in the cloud through AI requires the integration of several foundational pillars that work in concert to address the complexity and speed of modern digital environments. At the heart of this approach is the adoption of a Zero Trust Architecture: rather than assuming implicit trust based on network perimeters, this model treats every access request whether to data, applications, or infrastructure as potentially hostile, enforcing strict verification and comprehensive compliance controls. 


Initial Access Brokers Selling Bundles, Privileges and More

"By the time a threat actor logs in using the access and privileged credentials bought from a broker, a lot of the heavy lifting has already been done for them. Therefore, it's not about if you're exposed, but whether you can respond before the intrusion escalates." More than one attacker may use any given initial access, either because the broker sells it to multiple customers, or because a customer uses the access for one purpose - say, to steal data - then sells it on to someone else, who perhaps monetizes their purchase by further ransacking data and unleashing ransomware. "Organizations that unwittingly have their network access posted for sale on initial access broker forums have already been victimized once, and they are on their way to being victimized once again when the buyer attacks," the report says. ... "Access brokers often create new local or domain accounts, sometimes with elevated privileges, to maintain persistence or allow easier access for buyers," says a recent report from cybersecurity firm Kela. For detecting such activity, "unexpected new user accounts are a major red flag." So too is "unusual login activity" to legitimate accounts that traces to never-before-seen IP addresses, or repeat attempts that only belatedly succeed, Kela said. "Watch for legitimate accounts doing unusual actions or accessing resources they normally don't - these can be signs of account takeover."

Daily Tech Digest - August 04, 2025


Quote for the day:

"You don’t have to be great to start, but you have to start to be great." — Zig Ziglar


Why tomorrow’s best devs won’t just code — they’ll curate, coordinate and command AI

It is not just about writing code anymore — it is about understanding systems, structuring problems and working alongside AI like a team member. That is a tall order. That said, I do believe that there is a way forward. It starts by changing the way we learn. If you are just starting out, avoid relying on AI to get things done. It is tempting, sure, but in the long run, it is also harmful. If you skip the manual practice, you are missing out on building a deeper understanding of how software really works. That understanding is critical if you want to grow into the kind of developer who can lead, architect and guide AI instead of being replaced by it. ... AI-augmented developers will replace large teams that used to be necessary to move a project forward. In terms of efficiency, there is a lot to celebrate about this change — reduced communication time, faster results and higher bars for what one person can realistically accomplish. But, of course, this does not mean teams will disappear altogether. It is just that the structure will change. ... Being technically fluent will still remain a crucial requirement — but it won’t be enough to simply know how to code. You will need to understand product thinking, user needs and how to manage AI’s output. It will be more about system design and strategic vision. For some, this may sound intimidating, but for others, it will also open many doors. People with creativity and a knack for problem-solving will have huge opportunities ahead of them.


The Wild West of Shadow IT

From copy to deck generators, code assistants, and data crunchers, most of them were never reviewed or approved. The productivity gains of AI are huge. Productivity has been catapulted forward in every department and across every vertical. So what could go wrong? Oh, just sensitive data leaks, uncontrolled API connections, persistent OAuth tokens, and no monitoring, audit logs, or privacy policies… and that's just to name a few of the very real and dangerous issues. ... Modern SaaS stacks form an interconnected ecosystem. Applications integrate with each other through OAuth tokens, API keys, and third-party plug-ins to automate workflows and enable productivity. But every integration is a potential entry point — and attackers know it. Compromising a lesser-known SaaS tool with broad integration permissions can serve as a stepping stone into more critical systems. Shadow integrations, unvetted AI tools, and abandoned apps connected via OAuth can create a fragmented, risky supply chain.  ... Let's be honest - compliance has become a jungle due to IT democratization. From GDPR to SOC 2… your organization's compliance is hard to gauge when your employees use hundreds of SaaS tools and your data is scattered across more AI apps than you even know about. You have two compliance challenges on the table: You need to make sure the apps in your stack are compliant and you also need to assure that your environment is under control should an audit take place.


Edge Computing: Not Just for Tech Giants Anymore

A resilient local edge infrastructure significantly enhances the availability and reliability of enterprise digital shopfloor operations by providing powerful on-premises processing as close to the data source as possible—ensuring uninterrupted operations while avoiding external cloud dependency. For businesses, this translates to improved production floor performance and increased uptime—both critical in sectors such as manufacturing, healthcare, and energy. In today’s hyperconnected market, where customers expect seamless digital interactions around the clock, any delay or downtime can lead to lost revenue and reputational damage. Moreover, as AI, IoT, and real-time analytics continue to grow, on-premises OT edge infrastructure combined with industrial-grade connectivity such as private 4.9/LTE or 5G provides the necessary low-latency platform to support these emerging technologies. Investing in resilient infrastructure is no longer optional, it’s a strategic imperative for organisations seeking to maintain operational continuity, foster innovation, and stay ahead of competitors in an increasingly digital and dynamic global economy. ... Once, infrastructure decisions were dominated by IT and boiled down to a simple choice between public and private infrastructure. Today, with IT/OT convergence, it’s all about fit-for-purpose architecture. On-premises edge computing doesn’t replace the cloud — it complements it in powerful ways.


A Reporting Breakthrough: Advanced Reporting Architecture

Advanced Reporting Architecture is based on a powerful and scalable SaaS architecture, which efficiently addresses user-specific reporting requirements by generating all possible reports upfront. Users simply select and analyze the views that matter most to them. The Advanced Reporting Architecture’s SaaS platform is built for global reach and enterprise reliability, with the following features: Modern User Interface: Delivered via AWS, optimized for mobile and desktop, with seamless language switching (English, French, German, Spanish, and more to come). Encrypted Cloud Storage: Ensuring uploaded files and reports are always secure. Serverless Data Processing: High-precision processing that analyzes user-uploaded data and uses data influenced relevant factors to maximizing analytical efficiencies and lower the cost of processing efforts. Comprehensive Asset Management: Support for editable reports, dashboards, presentations, pivots, and custom outputs. Integrated Payments & Accounting: Powered by PayPal and Odoo. Simple Subscription Model: Pay only for what you use—no expensive licenses, hardware, or ongoing maintenance. Some leading-edge reporting platforms, such as PrestoCharts, are based on Advanced Reporting Architecture and have been successful in enabling business users to develop custom reports on the fly. Thus, Advanced Reporting Architecture puts reporting prowess in the hands of the user.


These jobs face the highest risk of AI takeover, according to Microsoft

According to the report -- which has yet to be peer-reviewed -- the most at-risk jobs are those that are based on the gathering, synthesis, and communication of information, at which modern generative AI systems excel: think translators, sales and customer service reps, writers and journalists, and political scientists. The most secure jobs, on the other hand, are supposedly those that depend more on physical labor and interpersonal skills. No AI is going to replace phlebotomists, embalmers, or massage therapists anytime soon. ... "It is tempting to conclude that occupations that have high overlap with activities AI performs will be automated and thus experience job or wage loss, and that occupations with activities AI assists with will be augmented and raise wages," the Microsoft researchers note in their report. "This would be a mistake, as our data do not include the downstream business impacts of new technology, which are very hard to predict and often counterintuitive." The report also echoes what's become something of a mantra among the biggest tech companies as they ramp up their AI efforts: that even though AI will replace or radically transform many jobs, it will also create new ones. ... It's possible that AI could play a role in helping people practice that skill. About one in three Americans are already using the technology to help them navigate a shift in their career, a recent study found.


AIBOMs are the new SBOMs: The missing link in AI risk management

AIBOMs follow the same formats as traditional SBOMs, but contain AI-specific content and metadata, like model family, acceptable usage, AI-specific licenses, etc. If you are a security leader at a large defense contractor, you’d need the ability to identify model developers and their country of origin. This would ensure you are not utilizing models originating from near-peer adversary countries, such as China. ... The first step is inventorying their AI. Utilize AIBOMs to inventory your AI dependencies, monitor what is approved vs. requested vs. denied, and ensure you have an understanding of what is deployed where. The second is to actively seek out AI, rather than waiting for employees to discover it. Organizations need capabilities to identify AI in code and automatically generate resulting AIBOMs. This should be integrated as part of the MLOps pipeline to generate AIBOMs and automatically surface new AI usage as it occurs. The third is to develop and adopt responsible AI policies. Some of them are fairly common-sense: no contributors from OFAC countries, no copylefted licenses, no usage of models without a three-month track record on HuggingFace, and no usage of models over a year old without updates. Then, enforce those policies in an automated and scalable system. The key is moving from reactive discovery to proactive monitoring.


2026 Budgets: What’s on Top of CIOs’ Lists (and What Should Be)

CIO shops are becoming outcome-based, which makes them accountable for what they’re delivering against the value potential, not how many hours were burned. “The biggest challenge seems to be changing every day, but I think it’s going to be all about balancing long-term vision with near-term execution,” says Sudeep George, CTO at software-delivered AI data company iMerit. “Frankly, nobody has a very good idea of what's going to happen in 2026, so everyone's placing bets,” he continues. “This unpredictability is going to be the nature of the beast, and we have to be ready for that.” ... “Reducing the amount of tech debt will always continue to be a focus for my organization,” says Calleja-Matsko. “We’re constantly looking at re-evaluating contracts, terms, [and] whether we have overlapping business capabilities that are being addressed by multiple tools that we have. It's rationalizing, she adds, and what that does is free up investment. How is this vendor pricing its offering? How do we make sure we include enough in our budget based on that pricing model? “That’s my challenge,” Calleja-Matsko emphasizes. Talent is top of mind for 2026, both in terms of attracting it and retaining it. Ultimately though, AI investments are enabling the company to spend more time with customers.


Digital Twin: Revolutionizing the Future of Technology and Industry

T​h​e rise o​f t​h​e cyberspace o​f Things [IoT] has made digital twin technology more relevant​ and accessible. IoT devices ceaselessly garner data from their surroundings a​n​d send i​t t​o t​h​e cloud. T​h​i​s data i​s used t​o produce a​n​d update digital twins o​f those devices o​r systems. I​n smart homes, digital twins help keep an eye on a​n​d see to it lighting, heating, a​n​d appliances. I​n blue-collar settings, IoT sensors track simple machine health a​n​d doing. Moreover, these smart systems c​a​n discover minor issues ahead of time that lead t​o failures. A​s more devices abound, digital twins offer greater conspicuousness a​n​d see to it. ... Despite its benefits, digital twin technology comes w​i​t​h challenges. One major issue i​s t​h​e high cost o​f carrying out. Setting up sensors, software systems, a​n​d data chopping c​a​n be overpriced, particularly f​o​r small businesses. There a​r​e also concerns about the data security system a​n​d privacy. Since digital twins rely o​n straight data flow, any rift c​a​n be risky. Integrating digital twins into existing systems c​a​n be involved. Moreover, i​t requires fine professionals who translate both t​h​e personal systems a​n​d t​h​e labyrinthine digital technologies. A different dispute i​s ensuring t​h​e caliber a​n​d truth o​f t​h​e data. I​f t​h​e input data i​s blemished, the digital twin’s results will also be erratic. Companies must also cope with large amounts o​f data, which requires a stressed I​T base. 


Why Banks Must Stop Pretending They’re Not Tech Companies

The most successful "banks" of the future may not even call themselves banks at all. While traditional institutions cling to century-old identities rooted in vaults and branches, their most formidable competitors are building financial ecosystems from the ground up with APIs, cloud infrastructure, and data-driven decision engines. ... The question isn’t whether banks will become technology companies. It’s whether they’ll make that transition fast enough to remain relevant. And to do this, they must rethink their identity by operating as technology platforms that enable fast, connected, and customer-first experiences. ... This isn’t about layering digital tools on top of legacy infrastructure or launching a chatbot and calling it innovation. It’s about adopting a platform mindset — one that treats technology not as a cost center but as the foundation of growth. A true platform bank is modular, API-first, and cloud-native. It uses real-time data to personalize every interaction. It delivers experiences that are intuitive, fast, and seamless — meeting customers wherever they are and embedding financial services into their everyday lives. ... To keep up with the pace of innovation, banks must adopt skills-based models that prioritize adaptability and continuous learning. Upskilling isn’t optional. It’s how institutions stay responsive to market shifts and build lasting capabilities. And it starts at the top.


Colo space crunch could cripple IT expansion projects

For enterprise IT execs who already have a lot on their plates, the lack of available colocation space represents yet another headache to deal with, and one with major implications. Nobody wants to have to explain to the CIO or the board of directors that the company can’t proceed with digitization efforts or AI projects because there’s no space to put the servers. IT execs need to start the planning process now to get ahead of the problem. ... Demand has outstripped supply due to multiple factors, according to Pat Lynch, executive managing director at CBRE Data Center Solutions. “AI is definitely part of the demand scenario that we see in the market, but we also see growing demand from enterprise clients for raw compute power that companies are using in all aspects of their business.” ... It’s not GPU chip shortages that are slowing down new construction of data centers; it’s power. When a hyperscaler, colo operator or enterprise starts looking for a location to build a data center, the first thing they need is a commitment from the utility company for the required megawattage. According to a McKinsey study, data centers are consuming more power due to the proliferation of the power-hungry GPUs required for AI. Ten years ago, a 30 MW data center was considered large. Today, a 200 MW facility is considered normal.

Daily Tech Digest - July 28, 2025


Quote for the day:

"Don't watch the clock; do what it does. Keep going." -- Sam Levenson



Architects Are… Human

Architects are not super-human. Most learned to be good by failing miserably dozens or hundreds of times. Many got the title handed to them. Many gave it to themselves. Most come from spectacularly different backgrounds. Most have a very different skill set. Most disagree with each other. ... When someone gets online and says, ‘Real Architects’, I puke a little. There are no real architects. Because there is no common definition of what that means. What competencies should they have? How were those competencies measured and by whom? Did the person who measured them have a working model by which to compare their work? To make a real architect repeatedly, we have to get together and agree what that means. Specifically. Repeatably. Over and over and over again. Tens of thousands of times and learn from each one how to do it better as a group! ... The competency model for a successful architect is large, difficult to learn, and most of employers do not recognize or give you opportunities to do it very often. They have defined their own internal model, from ‘all architects are programmers’ to ‘all architects work with the CEO’. The truth is simple. Study. Experiment. Ask tough questions. Simple answers are not the answer. You do not have to be everything to everyone. Business architects aren’t right, but neither are software architects.


Mitigating Financial Crises: The Need for Strong Risk Management Strategies in the Banking Sector

Poor risk management can lead to liquidity shortfalls, and failure to maintain adequate capital buffers can potentially result in insolvency and trigger wider market disruptions. Weak practices also contribute to a build-up of imbalances, such as lending booms, which unravel simultaneously across institutions and contribute to widespread market distress. In addition, banks’ balance sheets and financial contracts are interconnected, meaning a failure in one institution can quickly spread to others, amplifying systemic risk. ... Poor risk controls and a lack of enforcement also encourage excessive moral hazard and risk-taking behavior that exceed what a bank can safely manage, undermining system stability. Homogeneous risk diversification can also be costly and exacerbate systemic risk. When banks diversify risks in similar ways, individual risk reduction paradoxically increases the probability of simultaneous multiple failures. Fragmented regulation and inadequate risk frameworks fail to address these systemic vulnerabilities, since persistent weak risk management practices threaten the entire financial system. In essence, weak risk management undermines individual bank stability, while the interconnected and pro-cyclical nature of the banking system can trigger cascading failures that escalate into systemic crises.


Where Are the Big Banks Deploying AI? Simple Answer: Everywhere

Of all the banks presenting, BofA was the most explicit in describing how it is using various forms of artificial intelligence. Artificial intelligence allows the bank to effectively change the work across more areas of its operations than prior types of tech tools allowed, according to Brian Moynihan, chair and CEO. The bank included a full-page graphic among its presentation slides, the chart describing four "pillars," in Moynihan’s words, where the bank is applying AI tools. ... While many banks have tended to stop short of letting their use of GenAI touch customers directly, Synchrony has introduced a tool for its customers when they want to shop for various consumer items. It launched its pilot of Smart Search a year ago. Smart Search provides a natural language hunt joined with GenAI. It is a joint effort of the bank’s AI technology and product incubation teams. The functionality permits shoppers using Synchrony’s Marketplace to enter a phrase or theme to do with decorating and home furnishings. The AI presents shoppers with a "handpicked" selection of products matching the information entered, all of which are provided by merchant partners. ... Citizens is in the midst of its "Reimagining the Bank," Van Saun explained. This entails rethinking and redesigning how Citizens serves customers. He said Citizens is "talking with lots of outside consultants looking at scenarios across all industries across the planet in the banking industry."


How logic can help AI models tell more truth, according to AWS

By whatever name you call it, automated reasoning refers to algorithms that search for statements or assertions about the world that can be verified as true by using logic. The idea is that all knowledge is rigorously supported by what's logically able to be asserted. As Cook put it, "Reasoning takes a model and lets us talk accurately about all possible data it can produce." Cook gave a brief snippet of code as an example that demonstrates how automated reasoning achieves that rigorous validation. ... AWS has been using automated reasoning for a decade now, said Cook, to achieve real-world tasks such as guaranteeing delivery of AWS services according to SLAs, or verifying network security. Translating a problem into terms that can be logically evaluated step by step, like the code loop, is all that's needed. ... The future of automated reasoning is melding it with generative AI, a synthesis referred to as neuro-symbolic. On the most basic level, it's possible to translate from natural-language terms into formulas that can be rigorously analyzed using logic by Zelkova. In that way, Gen AI can be a way for a non-technical individual to frame their goal in informal, natural language terms, and then have automated reasoning take that and implement it rigorously. The two disciplines can be combined to give non-logicians access to formal proofs, in other words.


Can Security Culture Be Taught? AWS Says Yes

Security culture is broadly defined as an organization's shared strategies, policies, and perspectives that serve as the foundation for its enterprise security program. For many years, infosec leaders have preached the importance of a strong culture and how it cannot only strengthen the organization's security posture but also spur increases in productivity and profitability. Security culture has also been a focus in the aftermath of last year's scathing Cyber Safety Review Board (CSRB) report on Microsoft, which stemmed from an investigation into a high-profile breach of the software giant at the hands of the Chinese nation-state threat group Storm-0558. The CSRB found "Microsoft's security culture was inadequate and requires an overhaul," according to the April 2024 report. Specifically, the CSRB board members flagged an overall corporate culture at Microsoft that "deprioritized both enterprise security investments and rigorous risk management." ... But security culture goes beyond frameworks and executive structures; Herzog says leaders need to have the right philosophies and approaches to create an effective, productive environment for employees throughout the organization, not just those on the security team. ... A big reason why a security culture is hard to build, according to Herzog, is that many organizations are simply defining success incorrectly.


Data and AI Programs Are Effective When You Take Advantage of the Whole Ecosystem — The AIAG CDAO

What set the Wiki system apart was its built-in intelligence to personalize the experience based on user roles. Kashikar illustrated this with a use case: “If I’m a marketing analyst, when I click on anything like cross-sell, upsell, or new customer buying prediction, it understands I’m a marketing analyst, and it will take me to the respective system and provide me the insights that are available and accessible to my role.” This meant that marketing, engineering, or sales professionals could each have tailored access to the insights most relevant to them. Underlying the system were core principles that ensured the program’s effectiveness, says Kaahikar. This includes information, accessibility, and discoverability, and its integration with business processes to make it actionable. ... AI has become a staple in business conversations today, and Kashikar sees this growing interest as a positive sign of progress. While this widespread awareness is a good starting point, he cautions that focusing solely on models and technologies only scratches the surface, or can provide a quick win. To move from quick wins to lasting impact, Kashikar believes that data leaders must take on the role of integrators. He says, “The data leaders need to consider themselves as facilitators or connectors where they have to take a look at the entire ecosystem and how they leverage this ecosystem to create the greatest business impact which is sustainable as well.”


Designing the Future of Data Center Physical Security

Security planning is heavily shaped by the location of a data center and its proximity to critical utilities, connectivity, and supporting infrastructure. “These factors can influence the reliability and resilience of data centers – which then in turn will shift security and response protocols to ensure continuous operations,” Saraiya says. In addition, rurality, crime rate, and political stability of the region will all influence the robustness of security architecture and protocols required. “Our thirst for information is not abating,” JLL’s Farney says. “We’re doubling the amount of new information created every four years. We need data centers to house this stuff. And that's not going away.” John Gallagher, vice president at Viakoo, said all modern data centers include perimeter security, access control, video surveillance, and intrusion detection. ... “The mega-campuses being built in remote locations require more intentionally developed security systems that build on what many edge and modular deployments utilize,” Dunton says. She says remote monitoring and AI-driven analytics allow centralized oversight with minimizing on-site personnel, while compact, hardened enclosures with integrated access control, surveillance, and environmental sensors Emphasis is also placed on tamper detection, local alerting, and quick response escalation paths.


The legal minefield of hacking back

Attribution in cyberspace is incredibly complex because attackers use compromised systems, VPNs, and sophisticated obfuscation techniques. Even with high confidence, you could be wrong. Rather than operating in legal gray areas, companies need to operate under legally binding agreements that allow security researchers to test and secure systems within clearly defined parameters. That’s far more effective than trying to exploit ambiguities that may not actually exist when tested in court. ... Active defense, properly understood, involves measures taken within your own network perimeter, like enhanced monitoring, deception technologies like honeypots, and automated response systems that isolate threats. These are defensive because they operate entirely within systems you own and control. The moment you cross into someone else’s system, even to retrieve your own stolen data, you’ve entered offensive territory. It doesn’t matter if your intentions are defensive; the action itself is offensive. Retaliation goes even further. It’s about causing harm in response to an attack. This could be destroying the attacker’s infrastructure, exposing their operations, or launching counter-attacks. This is pure vigilantism and has no place in responsible cybersecurity. ... There’s also the escalation risk. That “innocent” infrastructure might belong to a government entity, a major corporation, or be considered critical infrastructure. 


What Is Data Trust and Why Does It Matter?

Data trust can be seen as data reliability in action. When you’re driving your car, you trust that its speedometer is reliable. A driver who believes his speedometer is inaccurate may alter the car’s speed to compensate unnecessarily. Similarly, analysts who lose faith in the accuracy of the data powering their models may attempt to tweak the models to adjust for anomalies that don’t exist. Maximizing the value of a company’s data is possible only if the people consuming the data trust the work done by the people developing their data products. ... Understanding the importance of data trust is the first step in implementing a program to build trust between the producers and consumers of the data products your company relies on increasingly for its success. Once you know the benefits and risks of making data trustworthy, the hard work of determining the best way to realize, measure, and maintain data trust begins. Among the goals of a data trust program are promoting the company’s privacy, security, and ethics policies, including consent management and assessing the risks of sharing data with third parties. The most crucial aspect of a data trust program is convincing knowledge workers that they can trust AI-based tools. A study released recently by Salesforce found that more than half of the global knowledge workers it surveyed don’t trust the data that’s used to train AI systems, and 56% find it difficult to extract the information they need from AI systems.


Six reasons successful leaders love questions

A modern way of saying this is that questions are data. Leaders who want to leverage this data should focus less on answering everyone’s questions themselves and more on making it easy for the people they are talking to—their employees—to access and help one another answer the questions that have the biggest impact on the company’s overall purpose. For example, part of my work with large companies is to help leaders map what questions their employees are asking one another and analyze the group dynamics in their organization. This gives leaders a way to identify critical problems and at the same time mobilize the people who need to solve them. ... The key to changing the culture of an organization is not to tell people what to do, but to make it easy for them to ask the questions that make them consider their current behavior. Only by making room for their colleagues, employees, and other stakeholders to ask their own questions and activate their own experience and insights can leaders ensure that people’s buy-in to new initiatives is an active choice, and thus something they feel committed to acting on. ... The decision to trust the process of asking and listening to other people’s questions is also a decision to think of questioning as part of a social process—something we do to better understand ourselves and the people surrounding us.