
Daily Tech Digest - August 23, 2025


Quote for the day:

"Failure is the condiment that gives success its flavor." -- Truman Capote


Enterprise passwords becoming even easier to steal and abuse

Attackers actively target user credentials because they offer the most direct route or foothold into a targeted organization’s network. Once inside, attackers can move laterally across systems, searching for other user accounts to compromise, or they attempt to escalate their privileges and gain administrative control. This hunt for credentials extends beyond user accounts to include code repositories, where developers may have hard-coded access keys and other secrets into application source code. Attacks using valid credentials were successful 98% of the time, according to Picus Security. ... “CISOs and security teams should focus on enforcing strong, unique passwords, using MFA everywhere, managing privileged accounts rigorously and testing identity controls regularly,” Curran says. “Combined with well-tuned DLP and continuous monitoring that can detect abnormal patterns quickly, these measures can help limit the impact of stolen or cracked credentials.” Picus Security’s latest findings reveal a concerning gap between the perceived protection of security tools and their actual performance. An overall protection effectiveness score of 62% contrasts with a shockingly low 3% prevention rate for data exfiltration. “Failures in detection rule configuration, logging gaps and system integration continue to undermine visibility across security operations,” according to Picus Security.
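
Those hard-coded secrets are typically found with pattern scans. A minimal sketch of the idea in Python, assuming a hypothetical repo/ directory; real scanners such as gitleaks or truffleHog ship hundreds of rules plus entropy checks.

import re
from pathlib import Path

# Two illustrative credential patterns; real tools carry far larger rule sets.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][0-9A-Za-z]{20,}['\"]"),
}

def scan_repo(root: str):
    """Yield (file, rule, line_no) for every suspected hard-coded secret."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for rule, pattern in PATTERNS.items():
                if pattern.search(line):
                    yield path, rule, lineno

if __name__ == "__main__":
    for path, rule, lineno in scan_repo("repo"):
        print(f"{path}:{lineno}: possible {rule}")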


Architecting the next decade: Enterprise architecture as a strategic force

In an age of escalating cyber threats and expanding digital footprints, security can no longer be layered on; it must be architected in from the start. With the rise of AI, IoT and even quantum computing on the horizon, the threat landscape is more dynamic than ever. Security-embedded architectures prioritize identity-first access control, continuous monitoring and zero-trust principles as baseline capabilities. ... Sustainability is no longer a side initiative; it’s becoming a first principle of enterprise architecture. As organizations face pressure from regulators, investors and customers to lower their carbon footprint, digital sustainability is gaining traction as a measurable design objective. From energy-efficient data centers to cloud optimization strategies and greener software development practices, architects are now responsible for minimizing the environmental impact of IT systems. The Green Software Foundation has emerged as a key ecosystem partner, offering measurement standards like software carbon intensity (SCI) and tooling for emissions-aware development pipelines. ... Technology leaders must now foster a culture of innovation, build interdisciplinary partnerships and enable experimentation while ensuring alignment with long-term architectural principles. They must guide the enterprise through both transformation and stability, navigating short-term pressures and long-term horizons simultaneously.


Capitalizing on Digital: Four Strategic Imperatives for Banks and Credit Unions

Modern architectures dissolve the boundary between core and digital. The digital banking solution is no longer a bolt-on to the core; the core and digital come together to form the accountholder experience. That user experience is delivered through the digital channel, but when done correctly, it’s enabled by the modern core. Among other things, the core transformation requires robust use of shared APIs, consistent data structures, and unified development teams. Leading financial institutions are coming to realize that core evaluations must now include an evaluation of the core's capability to enable the digital experience. Criteria like availability, reliability, real-time processing, speed, and security are now emerging as foundational requirements of a core that enables the digital experience. "If your core can’t keep up with your digital, you’re stuck playing catch-up forever," said Jack Henry’s Paul Wiggins, Director of Sales, Digital Engineers. ... Many institutions still operate with digital siloed in one department, while marketing, product, and operations pursue separate agendas. This leads to mismatched priorities — products that aren’t promoted effectively, campaigns that promise features operations can’t support, and technical fixes that don’t address the root cause of customer and member pain points. ... Small-business services are a case in point. Jack Henry’s Strategy Benchmark study found that 80% of CEOs plan to expand these services over the next two years. 


Bentley Systems CIO Talks Leadership Strategy and AI Adoption

The thing that’s really important for a CIO to be thinking about is that we are a microcosm for how all of the business functions are trying to execute the tactics against the strategy. What we can do across the portfolio is represent the strategy in real terms back to the business. We can say: These are all of the different places where we're thinking about investing. Does that match with the strategy we thought we were setting for ourselves? And where is there a delta and a difference? ... When I got my first CIO role, there was all of this conversation about business process. That was the part that I had to learn and figure out how to map into these broader, strategic conversations. I had my first internal IT role at Deutsche Bank, where we really talked about product model a lot -- thinking about our internal IT deliverables as products. When I moved to Lenovo, we had very rich business process and transformation conversations because we were taking the whole business through such a foundational change. I was able to put those two things together. It was a marriage of several things: running a product organization; marrying that to the classic IT way of thinking about business process; and then determining how that becomes representative to the business strategy.


What Is Active Metadata and Why Does It Matter?

Active metadata addresses the shortcomings of passive approaches by automatically updating the metadata whenever an important aspect of the information changes. Defining active metadata and understanding why it matters begins by looking at the shift in organizations’ data strategies from a focus on data acquisition to data consumption. The goal of active metadata is to promote the discoverability of information resources as they are acquired, adapted, and applied over time. ... From a data consumer’s perspective, active metadata adds depth and breadth to their perception of the data that fuels their decision-making. By highlighting connections between data elements that would otherwise be hidden, active metadata promotes logical reasoning about data assets. This is especially so when working on complex problems that involve a large number of disconnected business and technical entities. The active metadata analytics workflow orchestrates metadata management across platforms to enhance application integration, resource management, and quality monitoring. It provides a single, comprehensive snapshot of the current status of all data assets involved in business decision-making. The technology augments metadata with information gleaned from business processes and information systems. 


Godrej Enterprises CHRO on redefining digital readiness as culture, not tech

“Digital readiness at Godrej Enterprises Group is about empowering every employee to thrive in an ever-evolving landscape,” Kaur said. “It’s not just about technology adoption. It’s about building a workforce that is agile, continuously learning, and empowered to innovate.” This reframing reflects a broader trend across Indian industry, where digital transformation is no longer confined to IT departments but runs through every layer of an organisation. For Godrej Enterprises Group, this means designing a workplace where intrapreneurship is rewarded, innovation is constant, and employees are trained to think beyond immediate functions. ... “We’ve moved away from one-off training sessions to creating a dynamic ecosystem where learning is accessible, relevant, and continuous,” she said. “Learning is no longer a checkbox — it’s a shared value that energises our people every day.” This shift is underpinned by leadership development programmes and innovation platforms, ensuring that employees at every level are encouraged to experiment and share knowledge.  ... “We see digital skilling as a core business priority, not just an HR or L&D initiative,” she said. “By making digital skilling a shared responsibility, we foster a culture where learning is continuous, progress is visible, and success is celebrated across the organisation.”


AI is creeping into the Linux kernel - and official policy is needed ASAP

However, before you get too excited, he warned: "This is a great example of what LLMs are doing right now. You give it a small, well-defined task, and it goes and does it. And you notice that this patch isn't, 'Hey, LLM, go write me a driver for my new hardware.' Instead, it's very specific -- convert this specific hash to use our standard API." Levin said another AI win is that "for those of us who are not native English speakers, it also helps with writing a good commit message. It is a common issue in the kernel world where sometimes writing the commit message can be more difficult than actually writing the code change, and it definitely helps there with language barriers." ... Looking ahead, Levin suggested LLMs could be trained to become good Linux maintainer helpers: "We can teach AI about kernel-specific patterns. We show examples from our codebase of how things are done. It also means that by grounding it into our kernel code base, we can make AI explain every decision, and we can trace it to historical examples." In addition, he said the LLMs can be connected directly to the Linux kernel Git tree, so "AI can go ahead and try and learn things about the Git repo all on its own." ... This AI-enabled program automatically analyzes Linux kernel commits to determine whether they should be backported to stable kernel trees. The tool examines commit messages, code changes, and historical backporting patterns to make intelligent recommendations.
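
The article doesn't publish the tool's internals, but the signals it describes are easy to approximate. A toy heuristic in Python, not Levin's actual tool: score a commit on conventions the kernel already uses, such as a Fixes: tag or a Cc: stable line, plus diff size.

import re

def backport_score(commit_msg: str, lines_changed: int) -> float:
    """Crude heuristic: does this commit look like stable-tree material?"""
    score = 0.0
    if re.search(r"^Fixes: [0-9a-f]{12,}", commit_msg, re.MULTILINE):
        score += 0.5                      # explicitly fixes an earlier commit
    if "Cc: stable@vger.kernel.org" in commit_msg:
        score += 0.4                      # author already requested a backport
    if lines_changed < 100:
        score += 0.1                      # small, self-contained changes travel well
    return min(score, 1.0)

msg = """mm: fix use-after-free in example path

Fixes: 0123456789ab ("mm: earlier change")
Cc: stable@vger.kernel.org
"""
print(backport_score(msg, lines_changed=12))   # -> 1.0, a strong candidate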


Applications and Architecture – When It’s Not Broken, Should You Try to Fix It?

No matter how reliable your application components are, they will need to be maintained, upgraded or replaced at some point. As elements in your application evolve, some will reach end-of-life status – for example, Redis 7.2 will stop receiving security updates in February 2026. Before that point, it’s necessary to assess the available options. For businesses in some sectors, like financial services, running out-of-date and unsupported software is a potential compliance failure under regulations on security and resilience. For example, the Payment Card Industry Data Security Standard version 4.0 requires teams to verify every year that all their software and hardware is still supported; for end-of-life software, teams must also provide a full migration plan to be completed within twelve months. ... For developers and software architects, understanding the role that any component plays in the overall application makes it easier to plan ahead. Even the most reliable and consistent component may need to change given outside circumstances. In the Discworld series, golems are so reliable that they become the standard for currency; at the same time, there are so many of them that any problem could affect the whole economy. When it comes to data caching, Redis has been a reliable companion for many developers. 


From cloud migration to cloud optimization

The report, based on insights from more than 2,000 IT leaders, reveals that a staggering 94% of global IT leaders struggle with cloud cost optimization. Many enterprises underestimate the complexities of managing public cloud resources and the inadvertent overspending that occurs from mismanagement, overprovisioning, or a lack of visibility into resource usage. This inefficiency goes beyond just missteps in cloud adoption. It also highlights how difficult it is to align IT cost optimization with broader business objectives. ... This growing focus sheds light on the rising importance of finops (financial operations), a practice aimed at bringing greater financial accountability to cloud spending. Adding to this complexity is the increasing adoption of artificial intelligence and automation tools. These technologies drive innovation, but they come with significant associated costs. ... The argument for greater control is not new, but it has gained renewed relevance when paired with cost optimization strategies. ... With 41% of respondents’ IT budgets still being directed to scaling cloud capabilities, it’s clear that the public cloud will remain a cornerstone of enterprise IT in the foreseeable future. Cloud services such as AI-powered automation remain integral to transformative business strategies, and public cloud infrastructure is still the preferred environment for dynamic, highly scalable workloads. Enterprises will need to make cloud deployments truly cost-effective.


The Missing Layer in AI Infrastructure: Aggregating Agentic Traffic

Software architects and engineering leaders building AI-native platforms are starting to notice familiar warning signs: sudden cost spikes on AI API bills, bots with overbroad permissions tapping into sensitive data, and a disconcerting lack of visibility or control over what these AI agents are doing. It’s a scenario reminiscent of the early days of microservices – before we had gateways and meshes to restore order – only now the "microservices" are semi-autonomous AI routines. Gartner has begun shining a spotlight on this emerging gap. ... Every major shift in software architecture eventually demands a mediation layer to restore control. When web APIs took off, API gateways became essential for managing authentication/authorization, rate limits, and policies. With microservices, service meshes emerged to govern internal traffic. Each time, the need only became clear once the pain of scale surfaced. Agentic AI is on the same path. Teams are wiring up bots and assistants that call APIs independently - great for demos ... So, what exactly is an AI Gateway? At its core, it’s a middleware component – either a proxy, service, or library – through which all AI agent requests to external services are channeled. Rather than letting each agent independently hit whatever API it wants, you route those calls via the gateway, which can then enforce policies and provide central management.
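
A minimal sketch of that mediation layer in Python, with invented names throughout: a single gateway object that checks each agent's policy (allowed hosts, a per-minute request budget) before forwarding the call and logging it centrally.

import time
import urllib.request
from urllib.parse import urlparse
from dataclasses import dataclass, field

@dataclass
class Policy:
    allowed_hosts: set          # hosts this agent may call
    max_per_minute: int         # request budget
    calls: list = field(default_factory=list)

class AIGateway:
    """Single choke point for agent traffic: authorization, rate limiting,
    and audit logging happen here instead of inside each agent."""

    def __init__(self):
        self.policies = {}

    def register(self, agent_id, policy):
        self.policies[agent_id] = policy

    def request(self, agent_id, url):
        policy = self.policies[agent_id]            # unregistered agents fail fast
        host = urlparse(url).hostname
        if host not in policy.allowed_hosts:
            raise PermissionError(f"{agent_id} may not call {host}")
        now = time.time()
        policy.calls = [t for t in policy.calls if now - t < 60]
        if len(policy.calls) >= policy.max_per_minute:
            raise RuntimeError(f"{agent_id} exceeded its request budget")
        policy.calls.append(now)
        print(f"audit: {agent_id} -> {url}")        # central visibility
        with urllib.request.urlopen(url) as resp:
            return resp.read()

gateway = AIGateway()
gateway.register("report-bot", Policy(allowed_hosts={"api.example.com"}, max_per_minute=60))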



Daily Tech Digest - August 11, 2025


Quote for the day:

"Leadership is absolutely about inspiring action, but it is also about guarding against mis-action." -- Simon Sinek


Attackers Target the Foundations of Crypto: Smart Contracts

Central to the attack is a malicious smart contract, written in the Solidity programming language, with obfuscated functionality that transfers stolen funds to a hidden externally owned account (EOA), says Alex Delamotte, the senior threat researcher with SentinelOne who wrote the analysis. ... The decentralized finance (DeFi) ecosystem relies on smart contracts — as well as other technologies such as blockchains, oracles, and key management — to execute transactions, manage data on a blockchain, and allow for agreements between different parties and intermediaries. Yet their linchpin status also makes smart contracts a focus of attacks and a key component of fraud. "A single vulnerability in a smart contract can result in the irreversible loss of funds or assets," Shashank says. "In the DeFi space, even minor mistakes can have catastrophic financial consequences. However, the danger doesn’t stop at monetary losses — reputational damage can be equally, if not more, damaging." ... Companies should take stock of all smart contracts by maintaining a detailed and up-to-date record of all deployed smart contracts, verifying every contract, and conducting periodic audits. Real-time monitoring of smart contracts and transactions can detect anomalies and provide fast response to any potential attack, says CredShields' Shashank.


Is AI the end of IT as we know it?

CIOs have always been challenged by the time, skills, and complexities involved in running IT operations. Cloud computing, low-code development platforms, and many DevOps practices helped IT teams move “up stack,” away from the ones and zeros, to higher-level tasks. Now the question is whether AI will free CIOs and IT to focus more on where AI can deliver business value, instead of developing and supporting the underlying technologies. ... Joe Puglisi, growth strategist and fractional CIO at 10xnewco, offered this pragmatic advice: “I think back to the days when you wrote an assembly and it took a lot of time. We introduced compilers, higher-level languages, and now we have AI that can write code. This is a natural progression of capabilities and not the end of programming.” The paradigm shift suggests CIOs will have to revisit their software development lifecycles for significant shifts in skills, practices, and tools. “AI won’t replace agile or DevOps — it’ll supercharge them with standups becoming data-driven, CI/CD pipelines self-optimizing, and QA leaning on AI for test creation and coverage,” says Dominik Angerer, CEO of Storyblok. “Developers shift from coding to curating, business users will describe ideas in natural language, and AI will build functional prototypes instantly. This democratization of development brings more voices into the software process while pushing IT to focus on oversight, scalability, and compliance.”


From Indicators to Insights: Automating Risk Amplification to Strengthen Security Posture

Security analysts don’t want more alerts. They want more relevant ones. Traditional SIEMs generate events using their own internal language that involves things like MITRE tags, rule names and severity scores. But what frontline responders really want to know is which users, systems, or cloud resources are most at risk right now. That’s why contextual risk modeling matters. Instead of alerting on abstract events, modern detection should aggregate risk around assets including users, endpoints, APIs, or services. This shifts the SOC conversation from “What alert fired?” to “Which assets should I care about today?” ... The burden of alert fatigue isn’t just operational but also emotional. Analysts spend hours chasing shadows, pivoting across tools, chasing one-off indicators that lead nowhere. When everything is an anomaly, nothing is actionable. Risk amplification offers a way to reduce the unseen yet heavy weight on security analysts and the emotional toll it can take by aligning high-risk signals to high-value assets and surfacing insights only when multiple forms of evidence converge. Rather than relying on a single failed login or endpoint alert, analysts can correlate chains of activity whether they be login anomalies, suspicious API queries, lateral movement, or outbound data flows – all of which together paint a much stronger picture of risk.
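
A sketch of that aggregation step in Python, with assumed field names rather than any vendor's schema: raw alerts fold into per-asset risk, and an asset only surfaces when several distinct evidence types converge.

from collections import defaultdict

# Hypothetical alert stream; the field names are assumptions.
alerts = [
    {"asset": "hr-db-01", "type": "login_anomaly",  "risk": 30},
    {"asset": "hr-db-01", "type": "suspicious_api", "risk": 40},
    {"asset": "hr-db-01", "type": "outbound_data",  "risk": 50},
    {"asset": "dev-vm-7", "type": "login_anomaly",  "risk": 35},
]

def amplify(alerts, min_evidence_types=2, min_score=80):
    """Surface assets only when multiple evidence types converge."""
    per_asset = defaultdict(lambda: {"score": 0, "types": set()})
    for a in alerts:
        per_asset[a["asset"]]["score"] += a["risk"]
        per_asset[a["asset"]]["types"].add(a["type"])
    return {
        asset: agg for asset, agg in per_asset.items()
        if len(agg["types"]) >= min_evidence_types and agg["score"] >= min_score
    }

print(amplify(alerts))   # only hr-db-01 crosses both thresholds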


The Immune System of Software: Can Biology Illuminate Testing?

In software engineering, quality assurance is often framed as identifying bugs, validating outputs, and confirming expected behaviour. But similar to immunology, software testing is much more than verification. It is the process of defining the boundaries of the system, training it to resist failure, and learning from its past weaknesses. Like the immune system, software testing should be multi-layered, adaptive, and capable of evolving over time. ... Just as innate immunity is present from biological birth, unit tests should be present from the birth of our code. Just as innate immunity doesn't need a full diagnostic history to act, unit tests don’t require a full system context. They work in isolation, making them highly efficient. But they also have limits: they can't catch integration issues or logic bugs that emerge from component interactions. That role belongs to more evolved layers. ... Negative testing isn’t about proving what a system can do — it’s about ensuring the system doesn’t do what it must never do. It verifies how the software behaves when exposed to invalid input, unauthorized access, or unexpected data structures. It asks: Does the system fail gracefully? Does it reject the bad while still functioning with the good? Just as an autoimmune disease results from a misrecognition of the self, software bugs often arise when we misrecognise what our code should do and what it should not do.


CSO hiring on the rise: How to land a top security exec role

“Boards want leaders who can manage risk and reputation, which has made soft skills — such as media handling, crisis communication, and board or financial fluency — nearly as critical as technical depth,” Breckenridge explains. ... “Organizations are seeking cybersecurity leaders who combine technical depth, AI fluency, and strong interpersonal skills,” Fuller says. “AI literacy is now a baseline expectation, as CISOs must understand how to defend against AI-driven threats and manage governance frameworks.” ... Offers of top pay and authority to CSO candidates obviously come with high expectations. Organizations are looking for CSOs with a strong blend of technical expertise, business acumen, and interpersonal strength, Fuller says. Key skills include cloud security, identity and access management (IAM), AI governance, and incident response planning. Beyond technical skills, “power skills” such as communication, creativity, and problem-solving are increasingly valued, Fuller explains. “The ability to translate complex risks into business language and influence board-level decisions is a major differentiator. Traits such as resilience, adaptability, and ethical leadership are essential — not only for managing crises but also for building trust and fostering a culture of security across the enterprise,” he says.


From legacy to SaaS: Why complexity is the enemy of enterprise security

By modernizing, i.e., moving applications to a more SaaS-like consumption model, the network perimeter and associated on-prem complexity tend to dissipate, which is actually a good thing, as it makes ZTNA easier to implement. As the main entry point into an organization’s IT system becomes the web application URL (and browser), this reduces attackers’ opportunities and forces them to focus on the identity layer, subverting authentication, phishing, etc. Of course, a higher degree of trust has to be placed (and tolerated) in SaaS providers, but at least we now have clear guidance on what to look for when transitioning to SaaS and cloud: identity protection, MFA, and phishing-resistant authentication mechanisms become critical—and these are often enforced by default or at least much easier to implement compared to traditional systems. ... The unwillingness to simplify the technology stack by moving to SaaS is then combined with a reluctant and forced move to the cloud for some applications, usually dictated by business priorities or even ransomware attacks (as in the BL case above). This is a toxic mix which increases complexity and reduces the ability of a resource-constrained organization to keep security risks at bay.


Why Metadata Is the New Interface Between IT and AI

A looming risk in enterprise AI today is using the wrong data or proprietary data in AI data pipelines. This may include feeding internal drafts to a public chatbot, training models on outdated or duplicate data, or using sensitive files containing employee, customer, financial or IP data. The implications range from wasted resources to data breaches and reputational damage. A comprehensive metadata management strategy for unstructured data can mitigate these risks by acting as a gatekeeper for AI workflows. For example, if a company wants to train a model to answer customer questions in a chatbot, metadata can be used to exclude internal files, non-final versions, or documents marked as confidential. Only the vetted, tagged, and appropriate content is passed through for embedding and inference. This is a more intelligent, nuanced approach than simply dumping all available files into an AI pipeline. With rich metadata in place, organizations can filter, sort, and segment data based on business requirements, project scope, or risk level. Metadata augments vector labeling for AI inferencing. A metadata management system helps users discover which files to feed the AI tool, such as health benefits documents for an HR chatbot, while vector labeling gives deeper information about what’s in each document.
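
That gatekeeper pattern is essentially a one-function idea. A sketch in Python with assumed tag names (status, classification): only documents whose metadata passes the policy ever reach the embedding step.

# Hypothetical corpus: each file carries tags maintained by the metadata
# management system; the tag names here are assumptions.
corpus = [
    {"path": "benefits_2025_final.pdf",  "status": "final", "classification": "public"},
    {"path": "benefits_2025_draft.docx", "status": "draft", "classification": "internal"},
    {"path": "salary_bands.xlsx",        "status": "final", "classification": "confidential"},
]

def vetted_for_chatbot(doc: dict) -> bool:
    """Gate: only final, non-confidential content may be embedded."""
    return doc["status"] == "final" and doc["classification"] == "public"

to_embed = [doc for doc in corpus if vetted_for_chatbot(doc)]
for doc in to_embed:
    print("embedding", doc["path"])     # hand off to the embedding pipeline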


Ask a Data Ethicist: What Should You Know About De-Identifying Data?

Simply put, data de-identification is removing or obscuring details from a dataset in order to preserve privacy. We can think about de-identification as existing on a continuum... Pseudonymization is the application of different techniques to obscure the information, but allows it to be accessed when another piece of information (key) is applied. In the above example, the identity number might unlock the full details – Joe Blogs of 123 Meadow Drive, Moab UT. Pseudonymization retains the utility of the data while affording a certain level of privacy. It should be noted that while the terms anonymize or anonymization are widely used – including in regulations – some feel it is not really possible to fully anonymize data, as there is always a non-zero chance of reidentification. Yet, taking reasonable steps on the de-identification continuum is an important part of compliance with requirements that call for the protection of personal data. There are many different articles and resources that discuss a wide variety of de-identification techniques and the merits of various approaches, ranging from simple masking techniques to more sophisticated types of encryption. The objective is to strike a balance: the technique must be complex enough to ensure sufficient protection, while not being burdensome to implement and maintain.
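
A sketch of keyed pseudonymization in Python using the article's Joe Blogs example, assuming a secret key held apart from the dataset: an HMAC of the identifier becomes the pseudonym, and only the key holder's lookup table can link records back.

import hmac
import hashlib

SECRET_KEY = b"keep-me-in-a-vault"        # assumption: stored apart from the data

def pseudonymize(identifier: str) -> str:
    """Deterministic keyed token: same person -> same pseudonym,
    but unlinkable without the key."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Joe Blogs", "address": "123 Meadow Drive, Moab UT", "visits": 7}
token = pseudonymize(record["name"])
released = {"id": token, "visits": record["visits"]}    # analysts see only this
key_table = {token: record["name"]}                     # re-identification key, access-controlled

print(released)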


5 ways business leaders can transform workplace culture - and it starts by listening

Antony Hausdoerfer, group CIO at auto breakdown specialist The AA, said effective leaders recognize that other people will challenge established ways of working. Hearing these opinions comes with an open management approach. "You need to ensure that you're humble in listening, but then able to make decisions, commit, and act," he said. "Effective listening is about managing with humility with commitment, and that's something we've been very focused on recently." Hausdoerfer told ZDNET how that process works in his IT organization. "I don't know the answer to everything," he said. "In fact, I don't know the answer to many things, but my team does, and by listening to them, we'll probably get the best outcome. Then we commit to act." ... Bev White, CEO at technology and talent solutions provider Nash Squared, said open ears are a key attribute for successful executives. "There are times to speak and times to listen -- good leaders recognize which is which," she said. "The more you listen, the more you will understand how people are really thinking and feeling -- and with so many great people in any business, you're also sure to pick up new information, deepen your understanding of certain issues, and gain key insights you need."


Beyond Efficiency: AI's role in reshaping work and reimagining impact

The workplace of the future is not about humans versus machines; it's about humans working alongside machines. AI's real value lies in augmentation: enabling people to do more, do better, and do what truly matters. Take recruitment, for example. Traditionally time-intensive and often vulnerable to unconscious bias, hiring is being reimagined through AI. Today, organisations can deploy AI to analyse vast talent pools, match skills to roles with precision, and screen candidates based on objective data. This not only reduces time-to-hire but also supports inclusive hiring practices by mitigating biases in decision-making. In fact, across the employee lifecycle, it personalises experiences at scale. From career development tools that recommend roles and learning paths aligned with individual aspirations, to chatbots that provide real-time HR support, AI makes the employee journey more intuitive, proactive, and empowering. ... AI is not without its challenges. As with any transformative technology, its success hinges on responsible deployment. This includes robust governance, transparency, and a commitment to fairness and inclusion. Diversity must be built into the AI lifecycle, from the data it's trained on to the algorithms that guide its decisions. 

Daily Tech Digest - September 04, 2024

What is HTTP/3? The next-generation web protocol

HTTPS will still be used as a mechanism for establishing secure connections, but traffic will be encrypted at the HTTP/3 level. Another way to say it is that TLS will be integrated into the network protocol instead of working alongside it. So, encryption will be moved into the transport layer and out of the app layer. This means more security by default—even the headers in HTTP/3 are encrypted—but there is a corresponding cost in CPU load. Overall, the idea is that communication will be faster due to improvements in how encryption is negotiated, and it will be simpler because it will be built-in at a lower level, avoiding the problems that arise from a diversity of implementations. ... In TCP, that continuity isn’t possible because the protocol only understands the IP address and port number. If either of those changes—as when you walk from one network to another while holding a mobile device—an entirely new connection must be established. This reconnection leads to a predictable performance degradation. The QUIC protocol introduces connection IDs or CIDs. For security, these are actually CID sets negotiated by the server and client. 


6 things hackers know that they don’t want security pros to know that they know

It’s not a coincidence that many attacks happen at the most challenging of times. Hackers really do increase their attacks on weekends and holidays when security teams are lean. And they’re more likely to strike right before lunchtime and end-of-day, when workers are rushing and consequently less attentive to red flags indicating a phishing attack or fraudulent activity. “Hackers typically deploy their attacks during those times because they’re less likely to be noticed,” says Melissa DeOrio, global threat intelligence lead at S-RM, a global intelligence and cybersecurity consultancy. ... Threat actors actively engage in open-source intelligence (OSINT) gathering, looking for information they can use to devise attacks, Carruthers says. It’s not surprising that hackers look for news about transformative events such as big layoffs, mergers and the like, she says. But CISOs, their teams and other executives may be surprised to learn that hackers also look for news about seemingly innocuous events such as technology implementations, new partnerships, hiring sprees, and executive schedules that could reveal when they’re out of the office.


Take the ‘Shift Left’ Approach a Step Further by ‘Starting Left’

This makes it vital to guarantee code quality and security from the start so that nothing slips through the cracks. Shift left accounts for this. It minimizes risks of bugs and vulnerabilities by introducing code testing and analysis earlier in the SDLC, catching problems before they mount and become trickier to solve or even find. Advancing testing activities earlier puts DevOps teams in a position to deliver superior-quality software to customers with greater frequency. As a practice, “shift left” requires a lot more vigilance in today’s security landscape. But most development teams don’t have the mental (or physical) bandwidth to do it properly — even though it should be an intrinsic part of code development strategy. In fact, a recent Linux Foundation study revealed that almost one-third of developers aren’t familiar with secure software development practices. “Shifting left” — performing analysis and code reviews earlier in the development process — is a popular mindset for creating better software. What the mindset should be, though, is to “start left,” rather than imposing the burden on developers later in the SDLC. ... This mindset of “start left” focuses not only on an approach that values testing early and often, but also on using the best tools to do so. 


ONCD Unveils BGP Security Road Map Amid Rising Threats

The guidance comes amid an intensified threat landscape for BGP, which serves as the backbone of global internet traffic routing. BGP is a foundational yet vulnerable protocol, developed at a time when many of today's cybersecurity risks did not exist. Coker said the ONCD is committed to covering at least 60% of the federal government's IP space by registration service agreements "by the end of this calendar year." His office recently led an effort to develop a federal RSA template that federal agencies can use to facilitate their adoption of Resource Public Key Infrastructure, which can be used to mitigate BGP vulnerabilities. ... The ONCD report underscores how BGP "does not provide adequate security and resilience features" and lacks critical security capabilities, including the ability to validate the authority of remote networks to originate route announcements and to ensure the authenticity and integrity of routing information. The guidance tasks network operators with developing and periodically updating cybersecurity risk management plans that explicitly address internet routing security and resilience. It also instructs operators to identify all information systems and services internal to the organization that require internet access and assess the criticality of maintaining those routes for each address.
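
RPKI's Route Origin Validation, which the road map promotes, reduces to a simple check. A toy sketch in Python, not a production validator: compare each announcement against a ROA set of (prefix, max length, authorized origin AS), following the valid/invalid/not-found states of RFC 6811.

import ipaddress

# Toy ROA set: (prefix, max_length, authorized origin ASN). Real deployments
# pull these from RPKI repositories via a relying-party validator.
roas = [
    (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
]

def validate(prefix: str, origin_asn: int) -> str:
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, max_len, asn in roas:
        if net.subnet_of(roa_net):
            covered = True                # a ROA covers this announcement...
            if net.prefixlen <= max_len and asn == origin_asn:
                return "valid"            # ...and origin plus length both match
    return "invalid" if covered else "not-found"

print(validate("192.0.2.0/24", 64500))   # valid
print(validate("192.0.2.0/24", 64666))   # invalid: unauthorized origin AS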


Efficient DevSecOps Workflows With a Little Help From AI

When it comes to software development, AI offers lots of possibilities to enhance workflows at every stage—from splitting teams into specialized roles such as development, operations, and security to facilitating typical steps like planning, managing, coding, testing, documentation, and review. AI-powered code suggestions and generation capabilities can automate tasks like autocompletion and identification of missing dependencies, making coding more efficient. Additionally, AI can provide code explanations, summarizing algorithms, suggesting performance improvements, and refactoring long code into object-oriented patterns or different languages. ... Instead of manually sifting through job logs, AI can analyze them and provide actionable insights, even suggesting fixes. By refining prompts and engaging in conversations with the AI, developers can quickly diagnose and resolve issues, even receiving tips for optimization. Security is crucial, so sensitive data like passwords and credentials must be filtered before analysis. A well-crafted prompt can instruct the AI to explain the root cause in a way any software engineer can understand, accelerating troubleshooting. This approach can significantly improve developer efficiency.
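
That filtering step can sit as one function between the CI system and the model. A sketch in Python with assumed patterns; a real pipeline would use a secrets-detection library rather than two regexes.

import re

REDACTIONS = [
    (re.compile(r"(?i)(password|passwd|token|secret)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
    (re.compile(r"(?i)authorization: bearer \S+"), "Authorization: Bearer [REDACTED]"),
]

def sanitize(log_text: str) -> str:
    """Strip credentials from a job log before it is sent to an LLM."""
    for pattern, replacement in REDACTIONS:
        log_text = pattern.sub(replacement, log_text)
    return log_text

raw = "POST /deploy -> 401\nAuthorization: Bearer eyJhbGciOi...\nretrying with password=hunter2"
print(sanitize(raw))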


PricewaterhouseCoopers’ new CAIO – workers need to know their role with AI

“AI is becoming a natural part of everything we make and do. We’re moving past the AI exploration cycle, where managing AI is no longer just about tech, it is about helping companies solve big, important and meaningful problems that also drive a lot of economic value. “But the only way we can get there is by bringing AI into an organization’s business strategy, capability systems, products and services, ways of working and through your people. AI is more than just a tool — it can be viewed as a member of the team, embedding into the end-to-end value chain. The more AI becomes naturally embedded and intrinsic to an organization, the more it will help both the workforce and business be more productive and deliver better value. “In addition, we will see new products and services that are fully AI-powered come into the market — and those are going to be key drivers of revenue and growth.” ... You need to consider the bigger picture, understanding how AI is becoming integrated in all aspects of your organization. That means having your RAI leader working closely with your company’s CAIO (or equivalent) to understand changes in your operating model, business processes, products and services.


What Is Active Metadata and Why Does It Matter?

Active metadata’s ability to update automatically whenever the data it describes changes now extends beyond the data profile itself to enhance the management of data access, classification, and quality. Passive metadata’s static nature limits its use to data discovery, but the dynamic nature of active metadata delivers real-time insights into the data’s lineage to help automate data governance:

Get a 360-degree view of data - Active metadata’s ability to auto-update ensures that metadata delivers complete and up-to-date descriptions of the data’s lineage, context, and quality. Companies can tell at a glance whether the data is being used effectively, appropriately, and in compliance with applicable regulations.

Monitor data quality in real time - Automatic metadata updates improve data quality management by providing up-to-the-minute metrics on data completeness, accuracy, and consistency. This allows organizations to identify and respond to potential data problems before they affect the business.

Patch potential governance holes - Active metadata allows data governance rules to be enforced automatically to safeguard access to the data, ensure it’s appropriately classified, and confirm it meets all data retention requirements. 
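
A sketch of that automatic enforcement in Python, with invented rule and field names: every metadata update re-runs the governance rules, so violations surface the moment the data's context changes rather than at the next audit.

# Invented field names; the point is that rules re-run on every metadata change.
RULES = [
    ("retention", lambda m: m["age_days"] <= m["retention_days"]),
    ("classified", lambda m: not (m["classification"] == "restricted" and m["public_access"])),
]

class ActiveMetadata:
    def __init__(self, **fields):
        self.fields = fields
        self.check()

    def update(self, **changes):
        self.fields.update(changes)    # metadata refreshes as the data changes...
        self.check()                   # ...and governance re-evaluates immediately

    def check(self):
        for name, rule in RULES:
            if not rule(self.fields):
                print(f"governance violation: {name} on {self.fields['name']}")

meta = ActiveMetadata(name="orders", age_days=20, retention_days=365,
                      classification="restricted", public_access=False)
meta.update(public_access=True)   # -> governance violation: classified on orders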


How to Get IT and Security Teams to Work Together Effectively

Successful collaboration requires a sense of shared mission, Preuss says. Transparency is crucial. "Leverage technology and automation to effectively share information and challenges across both teams," she advises. Building and practicing trust and communication in an environment that's outside the norm is also essential. One way to do so is by conducting joint business resilience drills. "Whether a cyber war game or an environmental crisis [exercise], resilience drills are one way to test the collaboration between teams before an event occurs." ... When it comes to cross-team collaboration, Scott says it's important for members to understand their communication style as well as the communication styles of the people they work with. "At Immuta, we do this through a DiSC assessment, which each employee is invited to complete upon joining the company." To build an overall sense of cooperation and teamwork, Jeff Orr, director of research, digital technology at technology research and advisory firm ISG, suggests launching an exercise simulation in which both teams are required to collaborate in order to succeed. 


Protecting national interests: Balancing cybersecurity and operational realities

A significant challenge we face today is safeguarding the information space against misinformation, disinformation, manipulation and deceptive content. Whether this is at the behest of nation-states, or their supporters, it can be immensely destabilising and disruptive. We must find a way to tackle this challenge, but this should not just focus on the responsibilities held by social media platforms, but also on how we can detect targeted misinformation, counter those narratives and block the sources. Technology companies have a key role in taking down content that is obviously malicious, but we need the processes to respond in hours, rather than days and weeks. More generally, infrastructure used to launch attacks can be spun up more quickly than ever and attacks manifest at speed. This requires the government to work more closely with major technology and telecommunication providers so we can block and counter these threats – and that demands information sharing mechanisms and legal frameworks which enable this. Investigating and countering modern transnational cybercrime demands very different approaches, and of course AI will undoubtedly play a big part in this, but sadly both in attack and defence.


How leading CIOs cultivate business-centric IT

With digital strategy and technology as the brains behind most business functions and operating models, IT organizations are determined to inject more business-centricity into their employee DNA. IT leaders have been burnishing their business acumen and embracing a non-technical remit for some time. Now, there’s a growing desire to infuse that mentality throughout the greater IT organization, stretching beyond basic business-IT alignment to creating a collaborative force hyper-fixated on channeling innovation to advance enterprise business goals. “IT is no longer the group in the rear with the gear,” says Sabina Ewing, senior vice president of business and technology services and CIO at Abbott Laboratories. ... While those with robust experience and expertise in highly technical areas such as cloud architecture or cybersecurity are still highly coveted, IT organizations like Duke Health, ServiceNow, and others are also seeking a very different type of persona. Zoetis, a leading animal health care company, casts a wider net when seeking tech and digital talent, focusing on those who are collaborative, passionate about making a difference, and adaptable to change. Candidates should also have a strong understanding of technology application, says CIO Keith Sarbaugh.



Quote for the day:

"When someone tells me no, it doesn't mean I can't do it, it simply means I can't do it with them." -- Karen E. Quinones Miller

Daily Tech Digest - March 07, 2024

3 Key Metrics to Measure Developer Productivity

The team dimension considers business outcomes in a wider organizational context. While software development teams must work efficiently together, they must also work with teams across other business units. Often, non-technical factors, such as peer support, working environment, psychological safety and job enthusiasm play a significant role in boosting productivity. Another framework is SPACE, which is an acronym for satisfaction, performance, activity, communication and efficiency. SPACE was developed to capture some of the more nuanced and human-centered dimensions of productivity. SPACE metrics, in combination with DORA metrics, can fill in the productivity measurement gaps by correlating productivity metrics to business outcomes. McKinsey found that combining DORA and SPACE metrics with “opportunity-focused” metrics can produce a well-rounded view of developer productivity. That, in turn, can lead to positive outcomes, as McKinsey reports: 20% to 30% reduction in customer-reported product defects, 20% improvement in employee experience scores and 60% improvement in customer satisfaction ratings.
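
As a concrete anchor for the frameworks above, a sketch in Python computing two of the four DORA metrics from a hypothetical deployment log; SPACE's human-centered dimensions (satisfaction, communication) come from surveys rather than logs, which is why the two frameworks complement each other.

from datetime import date

# Hypothetical deployment log: (day, caused_incident)
deploys = [
    (date(2024, 3, 1), False),
    (date(2024, 3, 2), True),
    (date(2024, 3, 4), False),
    (date(2024, 3, 6), False),
]

days_observed = (deploys[-1][0] - deploys[0][0]).days + 1
deployment_frequency = len(deploys) / days_observed          # deploys per day
change_failure_rate = sum(failed for _, failed in deploys) / len(deploys)

print(f"deployment frequency: {deployment_frequency:.2f}/day")
print(f"change failure rate:  {change_failure_rate:.0%}")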


Metadata Governance: Crucial to Managing IoT

Governance of metadata requires formalization and agreement among stakeholders, based on existing Data Governance processes and activities. Through this program, business stakeholders engage in conversations to agree on what the data is and its context, generating standards around organizational metadata. The organization sees the results in a Business Glossary or data catalog. In addition to Data Governance tools, IT tools significantly contribute to metadata generation and usage, tracking updates, and collecting data. These applications, often equipped with machine learning capabilities, automate the gathering, processing, and delivery of metadata to identify patterns within the data without the need for manual intervention. ... The need for metadata governance services will emerge through establishing and maintaining this metadata management program. By setting up and running these services, an organization can better utilize Data Governance capabilities to collect, select, and edit metadata. Developing these processes requires time and effort, as metadata governance needs to adapt to the organization’s changing needs. 


CISOs Tackle Compliance With Cyber Guidelines

Operationally, CISOs will need to become increasingly involved with the organization as a whole -- not just the IT and security teams -- to understand the company’s overall security dynamics. “This is a much more resource-intensive process, but necessary until companies find sustainable footing in the new regulatory landscape,” Tom Kennedy, vice president of Axonius Federal Systems, explains via email. He points to the SEC disclosure mandate, which requires registrants to disclose “material cybersecurity incidents”, as a great example of how private companies are struggling to comply. From his perspective, the root problem is a lack of clarity within the mandate of what constitutes a “material” breach, and where the minimum bar should be set when it comes to a company’s security posture. “As a result, we’ve seen a large variety in companies’ recent cyber incident disclosures, including both the frequency, level of detail, and even timing,” he says. ... “The first step in fortifying your security posture is knowing what your full attack surface is -- you cannot protect what you don’t know about,” Kennedy says. “CISOs and their teams must be aware of all systems in their network -- both benign and active -- understand how they work together, what vulnerabilities they may have.”


AISecOps: Expanding DevSecOps to Secure AI and ML

AISecOps, the application of DevSecOps principles to AI/ML and generative AI, means integrating security into the life cycle of these models—from design and training to deployment and monitoring. Continuous security practices, such as real-time vulnerability scanning and automated threat detection, protection measures for the data and model repositories, are essential to safeguarding against evolving threats. One of the core tenets of DevSecOps is fostering a culture of collaboration between development, security and operations teams. This multidisciplinary approach is even more critical in the context of AISecOps, where developers, data scientists, AI researchers and cybersecurity professionals must work together to identify and mitigate risks. Collaboration and open communication channels can accelerate the identification of vulnerabilities and the implementation of fixes. Data is the lifeblood of AI and ML models. Ensuring the integrity and confidentiality of the data used for training and inference is paramount. ... Embedding security considerations from the outset is a principle that translates directly from DevSecOps to AI and ML development.


Translating Generative AI investments into tangible outcomes

Integration of Generative AI presents exciting opportunities for businesses, but it also comes with its fair share of risks. One significant concern revolves around data privacy and security. Generative AI systems often require access to vast amounts of sensitive data, raising concerns about potential breaches and unauthorised access. Moreover, there’s the challenge of ensuring the reliability and accuracy of generated outputs, as errors or inaccuracies could lead to costly consequences or damage to the brand’s reputation. Lastly, there’s the risk of over-reliance on AI-generated content, potentially diminishing human creativity and innovation within the organisation. Navigating these risks requires careful planning, robust security measures, and ongoing monitoring to ensure the responsible and effective integration of Generative AI into business operations. Consider a healthcare organisation that implements Generative AI for medical diagnosis assistance. In this scenario, the AI system requires access to sensitive patient data, including medical records, diagnostic tests, and personal information. 


Beyond the table stakes: CISO Ian Schneller on cybersecurity’s evolving role

Schneller encourages his audience to consider the gap between the demand for cyber talent and the supply of it. “Read any kind of public press,” he says, “and though the numbers may differ a bit, they’re consistent in that there are many tens, if not hundreds of thousands of open cyber positions.” In February of last year, according to Statista, about 750,000 cyber positions were open in the US alone. According to the World Economic Forum, the global number is about 3.5 million, and according to Cybercrime magazine, the disparity is expected to persist through at least 2025. As Schneller points out, this means companies will struggle to attract cyber talent, and they will have to seek it in non-traditional places. There are many tactics for attracting security talent—aligning pay to what matters, ensuring that you have clear paths for advancing careers—but all this sums to a broader point that Schneller emphasizes: branding. Your organization must convey that it takes cybersecurity seriously, that it will provide cybersecurity talent a culture in which they can solve challenging problems, advance their careers, and earn respect, contributing to the success of the business. 


Quantum Computing Demystified – Part 2

Quantum computing’s potential to invalidate current cryptographic standards necessitates a paradigm shift towards the development of quantum-resistant encryption methods, safeguarding digital infrastructures against future quantum threats. This scenario underscores the urgency in fortifying cybersecurity frameworks to withstand the capabilities of quantum algorithms. For decision-makers and policymakers, the quantum computing era presents a dual-edged sword of strategic opportunities and challenges. The imperative to embrace this nascent technology is twofold, requiring substantial investment in research, development, and education to cultivate a quantum-literate workforce. ... Bridging the quantum expertise gap through education and training is vital for fostering a skilled workforce capable of driving quantum innovation forward. Moreover, ethical and regulatory frameworks must evolve in tandem with quantum advancements to ensure equitable access and prevent misuse, thereby safeguarding societal and economic interests.


The Comprehensive Evolution Of DevSecOps In Modern Software Ecosystems

The potential for enhanced efficiency and accuracy in identifying and addressing security vulnerabilities is enormous, even though this improvement is not without its challenges, which include the possibility of algorithmic errors and shifts in job duties. Using tools that are powered by artificial intelligence, teams can prevent security breaches, perform code analysis more efficiently and automate mundane operations. This frees up human resources to be used for tackling more complicated and innovative problems. ... Under traditional software development approaches, security checks were frequently carried out at a later stage in the development cycle, which resulted in patches that were both expensive and time-consuming. The DevSecOps methodology takes a shift-left strategy, integrating security at the beginning of the development process and bringing it to the forefront. By incorporating security into the design and development phases from the outset, this proactive technique not only decreases the likelihood of vulnerabilities surfacing late, after they have become costly to fix, but also speeds up the development process.


How Generative AI and Data Management Can Augment Human Interaction with Data

In contrast with ETL processes, logical data management solutions enable real-time connections to disparate data sources without physically replicating any data. This is accomplished with data virtualization, a data integration method that establishes a virtual abstraction layer between data consumers and data sources. With this architecture, logical data management solutions enable organizations to implement flexible data fabrics above their disparate data sources, regardless of whether they are legacy or modern; structured, semistructured, or unstructured; cloud or on-premises; local or overseas; or static or streaming. The result is a data fabric that seamlessly unifies these data sources so data consumers can use the data without knowing the details about where and how it is stored. In the case of generative AI, where an LLM is the “consumer,” the LLM can simply leverage the available data, regardless of its storage characteristics, so the model can do its job. Another advantage of a data fabric is that because the data is universally accessible, it can also be universally governed and secured. 
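
A toy sketch of that virtual abstraction layer in Python, with invented source names: consumers query one logical view, adapters fetch from each source at query time, and nothing is replicated into a central store.

import csv
import json
import io

# Two heterogeneous "sources"; in production these would be JDBC, REST, etc.
CSV_SOURCE = "id,region,total\n1,EU,100\n2,US,250\n"
JSON_SOURCE = '[{"id": 3, "region": "APAC", "total": 75}]'

def from_csv():
    for row in csv.DictReader(io.StringIO(CSV_SOURCE)):
        yield {"id": int(row["id"]), "region": row["region"], "total": int(row["total"])}

def from_json():
    yield from json.loads(JSON_SOURCE)

class VirtualView:
    """One logical 'sales' table; data stays in the sources until queried."""
    def __init__(self, *adapters):
        self.adapters = adapters

    def query(self, predicate=lambda row: True):
        for adapter in self.adapters:
            for row in adapter():          # fetched on demand, never copied ahead of time
                if predicate(row):
                    yield row

sales = VirtualView(from_csv, from_json)
print(list(sales.query(lambda r: r["total"] > 80)))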


Developers don’t need performance reviews

Software development is commonly called a “team sport.” Assessing individual contributions in isolation can breed unhealthy competition, undermine teamwork, and incentivize behavior that, while technically hitting the mark, can be detrimental to good coding and good software. The pressure of performance evaluations can deter developers from innovative pursuits, pushing them towards safer paths. And developers shouldn’t be steering towards safer paths. The development environment is rapidly changing, and developers should be encouraged to experiment, try new things, and seek out innovative solutions. Worrying about hitting specific metrics squelches the impulse to try something new. Finally, a one-size-fits-all approach to performance reviews doesn’t take into account the unique nature of software development. Using the same system to evaluate developers and members of the marketing team won’t capture the unique skills found among developers. Some software developers thrive fixing bugs. Others love writing greenfield code. Some are fast but less accurate. Others are slower but highly accurate.



Quote for the day:

"Perseverance is failing nineteen times and succeeding the twentieth." -- Julie Andrews

Daily Tech Digest - May 14, 2023

How to Balance Data Governance with Data Democracy

Data democratization is important to an organization because it ensures an effective and efficient method of providing all users, regardless of technical expertise, the ability to analyze readily accessible and reliable data to influence data-driven decisions and drive real-time insights. This eliminates the frustration of requesting access, sorting information, or reaching out to IT for help. ... The solution to this problem lies in data federation, which makes data from multiple sources accessible under a uniform data model. This model acts as a "single point of access" such that organizations create a virtual database where data can be accessed where it already lives. This makes it easier for organizations to query data from different sources in one place. With a single point of access, users can go to one location for searching, finding, and accessing every piece of data your organization has. This will make it easier to democratize data access because you won’t need to facilitate access across many different sources.
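
A sketch of the single-point-of-access idea in Python, names invented: a catalog maps logical dataset names to wherever the data actually lives, so users search and resolve in one place while the data stays put.

# Invented catalog: logical name -> (system, location). The data never moves;
# the catalog just tells the federation layer where to route each query.
CATALOG = {
    "customers":   ("postgres", "crm.public.customers"),
    "clickstream": ("s3", "s3://analytics/clickstream/"),
    "tickets":     ("rest", "https://support.example.com/api/tickets"),
}

def resolve(dataset: str):
    """The single point of access: one lookup instead of a request to IT."""
    try:
        return CATALOG[dataset]
    except KeyError:
        raise LookupError(f"no dataset named {dataset!r}; try search()") from None

def search(term: str):
    return [name for name in CATALOG if term.lower() in name.lower()]

print(search("click"))            # -> ['clickstream']
print(resolve("clickstream"))     # -> ('s3', 's3://analytics/clickstream/')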


Will ChatGPT and Generative AI “Replace” Testing?

It stands to reason, then, that ChatGPT and generative AI will not "replace" testing or remove the need to invest in QA. Instead, like test execution automation before it, generative AI will provide a useful tool for moving faster. Yet, there will always be a need for more work, and at least a constant (if not greater) need for human input. Testers' time might be applied less to repetitive tasks like scripting, but new processes will fill the void. Meanwhile, the creativity and critical thinking offered by testers will not diminish in value as these repetitive processes are automated; such creativity should be given greater freedom. At the same time, your testers will have vital insight into how generative AI should be used in your organization. Nothing is adopted overnight, and identifying the optimal applications of tools like ChatGPT will be an ongoing conversation, just as the testing community has continually explored and improved practices for getting the most out of test automation frameworks. Lastly, as the volume of possible test scenarios grows, automation and AI will need a human steer in knowing where to target its efforts, even as we can increasingly use data to target test generation.


How agtech is poised to transform India into a farming powerhouse

Collaboration will be crucial. While agtechs might facilitate better decision making and replace manual farming practices like spraying, reducing dependence on retailers and mandis, incumbents remain important in the new ecosystem for R&D and the supply of chemicals and fertilizers. There are successful platforms already emerging that offer farmers an umbrella of products and services to address multiple, critical pain points. These one-stop shop agri-ecosystems are also creating a physical backbone/supply chain—which makes it easier for incumbents and start-ups to access the fragmented farmer base. Agtechs have a unique opportunity to become ideal partners for companies seeking market access. In this scenario, existing agriculture companies are creating value for the farmer by having more efficient and cost-effective access to the farmer versus traditional manpower-intensive setups. It’s a system that builds: the more agtechs know the farmer, the better products they can develop. India’s farms have been putting food on the table for India and the world for decades. 


How A Non Data Science Person Can Work Effectively With A Data Scientist

Effective communication is essential for a successful partnership. The data scientist should communicate technical procedures and conclusions in a clear and concise manner; in turn, the non-data science person should communicate business requirements and limitations. Both sides can collaborate successfully by developing a clear understanding of the project objectives and the data science methodologies. Setting expectations and establishing the project’s scope from the beginning is equally critical. The non-data scientist should specify what they expect from the data scientist, including the results they intend to achieve and the project’s schedule. In return, the data scientist should describe their areas of strength and the goals achievable within the project’s parameters. It is crucial to keep the lines of communication open and transparent throughout the process. Regular meetings and status reports should be organized to keep everyone informed of the project’s progress and to identify any potential issues.


Why Metadata Is a Critical Asset for Storage and IT Managers

Advanced metadata is handled differently by file storage and object storage environments. File storage organizes data in directory hierarchies, which means you can’t easily add custom metadata attributes. ... Metadata is massive because the volume and variety of unstructured data – files and objects – are massive and difficult to wrangle. Data is spread across on-premises and edge data centers and clouds, and stored in potentially many different systems. To leverage metadata, you first need a process and tools for managing data. Managing metadata requires both strategy and automation; choosing the best path forward can be difficult when business needs are constantly changing and data types are morphing as new kinds of data are collected, such as IoT data, surveillance data, geospatial data, and instrument data. Managing metadata as it grows can also be problematic. Can you have too much? One risk is a decrease in file storage performance. Organizations must consider how to mitigate this; one large enterprise we know switched from tagging metadata at the file level to the directory level.
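
To make the file-versus-object contrast concrete, here is a hedged Python sketch of how custom metadata attaches in each world. The bucket, key, path, and tag values are invented; the object half assumes an S3-compatible endpoint with boto3 credentials already configured, and the file half uses Linux extended attributes, which not every file system or tool supports.

```python
import os
import boto3  # assumes credentials and an S3-compatible endpoint are configured

s3 = boto3.client("s3")

# Object storage: arbitrary key-value metadata travels with the object itself.
s3.put_object(
    Bucket="research-data",            # hypothetical bucket
    Key="scan-0001.dcm",               # hypothetical object
    Body=b"...",
    Metadata={"project": "oncology", "instrument": "mri-7t"},
)
print(s3.head_object(Bucket="research-data", Key="scan-0001.dcm")["Metadata"])

# File storage: POSIX exposes only fixed attributes (size, mtime, owner);
# custom tags need extended attributes, a Linux-specific escape hatch here.
os.setxattr("/data/scan-0001.dcm", "user.project", b"oncology")
print(os.getxattr("/data/scan-0001.dcm", "user.project"))
```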


Understand the 3 major approaches to data migration

Application data migration—sometimes called logical data migration or transaction-level migration—is a migration approach that utilizes the data mobility capabilities built natively into the application workload itself. ... Technique: Some applications offer proprietary data mobility features. These capabilities usually facilitate or assist with configuring backups or secondary storage. These applications then synchronously or asynchronously ensure that the secondary storage is valid and, when necessary, can be used without the primary copy. ... Block-level data migration is performed at the storage volume level. Block-level migrations are not strictly concerned with the actual data stored within the storage volume. Rather, they copy everything on the volume: file systems of any kind, partitions of any kind, raw block storage, and data from any application. Technique: Block-level migration tools synchronize one storage volume to another storage volume from the beginning of the volume (byte 0) to the end of the entire volume (byte N) without processing any data content.
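
As a rough illustration of that byte-0-to-byte-N behavior, the toy Python sketch below copies a volume chunk by chunk with no awareness of what the bytes contain. The device paths and chunk size are arbitrary examples; production tools additionally handle in-flight writes, verification, and cutover, all of which this omits.

```python
CHUNK = 4 * 1024 * 1024  # read/write unit: 4 MiB, an arbitrary choice

def block_copy(src_path: str, dst_path: str) -> None:
    """Copy a volume from byte 0 to byte N, ignoring files, partitions, content."""
    with open(src_path, "rb") as src, open(dst_path, "r+b") as dst:
        while True:
            chunk = src.read(CHUNK)
            if not chunk:        # reached byte N, the end of the source volume
                break
            dst.write(chunk)

# block_copy("/dev/sdb", "/dev/sdc")  # example devices; needs root and a
#                                     # destination at least as large as the source
```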


Open Source MongoDB Alternative FerretDB Now Generally Available

FerretDB works as a proxy that translates MongoDB wire protocol queries to SQL, with PostgreSQL as the database backend. Started as an open-source alternative to MongoDB, FerretDB provides the same MongoDB APIs without developers needing to learn a new language or command. Peter Farkas, co-founder and CEO of FerretDB, explains: We are creating a new standard for document databases with MongoDB compatibility. FerretDB is a drop-in replacement for MongoDB, but it also aims to set a new standard that not only brings easy-to-use document databases back to its open-source roots but also enables different database engines to run document database workloads using a standardized interface. While FerretDB is built on PostgreSQL, the database is designed with a pluggable architecture to support other backends, with projects for Tigris, SAP HANA, and SQLite currently in the works. Written in Go, the project was originally started because the Server Side Public License (SSPL) that MongoDB adopted in 2018 does not meet all the criteria for open-source software set by the Open Source Initiative.
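
Because FerretDB speaks the MongoDB wire protocol, a stock MongoDB driver should connect to it unchanged, which is what "drop-in replacement" implies. A minimal pymongo sketch follows; the connection string, database, and collection names are assumptions for a local FerretDB instance and will differ per deployment.

```python
from pymongo import MongoClient

# Hypothetical local FerretDB endpoint; credentials and port depend on your setup.
client = MongoClient("mongodb://username:password@localhost:27017/")
inventory = client["appdb"]["inventory"]

inventory.insert_one({"sku": "A-100", "qty": 25})  # persisted in PostgreSQL behind the proxy
print(inventory.find_one({"sku": "A-100"}))        # queried with ordinary MongoDB syntax
```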


Wardley Mapping and Strategy for Software Developers

This is a more engineering-focused way to look at a business and isn’t dependent on stories, aphorisms or strange MBA terms. A few people have asked me personally whether this method really works. But it isn’t a “method” as such, just a way to agree on the environment that may otherwise be left unchallenged. Jennifer Riggins has already covered the background to Wardley mapping in detail, so I only need to summarize what we need to become aware of. ... So how do you map your own projects? One good start is simply to get your team together and see if they can map just the build process — with a build as the final product (the cup of tea). For example: starting from an agreed story, through to a change in the code in the repository, to a checkout into a staging build, to deployment. See if everyone even agrees what this looks like. The result should eventually be a common understanding. There are plenty of introductions to mapping, but the important thing is to recognize that you can represent a business in a fairly straightforward way.
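
Even a few lines of code can capture such a first-pass map. The toy Python sketch below positions each component of the build-process example by user visibility and evolution stage; the coordinates and stage labels are invented purely for illustration and are not part of any Wardley tooling.

```python
# A Wardley map reduced to its two axes: visibility to the user (y) and
# evolution (genesis -> custom -> product -> commodity, x). Values are made up.
build_map = {
    "agreed story":        {"visibility": 1.0, "evolution": "custom"},
    "code change in repo": {"visibility": 0.7, "evolution": "product"},
    "staging build":       {"visibility": 0.4, "evolution": "product"},
    "deployment":          {"visibility": 0.2, "evolution": "commodity"},
}

for component, pos in build_map.items():
    print(f"{component:<20} visibility={pos['visibility']:.1f} {pos['evolution']}")
```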


The Leader's Role in Building Independent Thinkers: How to Equip Your Team for Success

Striving for perfection can often lead to "analysis paralysis," hindering progress and preventing team members from taking action. To encourage independent thinking, leaders must prioritize action over perfection. By creating a culture of experimentation and iteration, employees learn from their mistakes, build confidence, and become less afraid of failure. ... Standing firmly behind your values and vision is a powerful way for leaders to generate independent thinking in their teams. When team members see their leader living by strong values and embodying a clear vision, they feel empowered to follow their example. This approach cultivates an environment of trust and confidence, enabling your employees to think critically and independently. ... It is essential for leaders to avoid merely delegating tasks and stepping back. Instead, actively participate in the work alongside your team, providing guidance and offering support when needed. This approach instills a sense of collaboration and helps your team feel part of something bigger. 


The Great Resignation takes a dark turn: Is it now the Great Layoff? Expert weighs in

The main challenges that Gen-Z employees face in the event of a layoff are a lack of savings, a lack of job experience, and a lack of job security. Many Generation Z workers are just starting out in their careers and haven't had time to save; they may have little or no cushion in a financial emergency such as job loss. Because Generation Z is so young, they have not yet had the opportunity to gain the experience that their elders have, and they worry that without it they will struggle to re-enter the workforce if laid off. Finally, even employed Gen Z workers may believe their job is in jeopardy due to the pandemic's impact on their industry. They may be concerned that their employer will lay off employees or that their position will become obsolete as the company adapts to the changing business environment. Because of these challenges and ongoing economic uncertainty, Generation Z remains concerned about the possibility of layoffs.



Quote for the day:

"Innovation distinguishes between a leader and a follower." -- Steve Jobs

Daily Tech Digest - November 25, 2022

Ripe For Disruption: Artificial Intelligence Advances Deeper Into Healthcare

The challenges and changes needed to advance AI go well beyond technology considerations. “With data and AI entering healthcare, we are dealing with an in-depth cultural change that will not happen overnight,” according to Pierron-Perlès and her co-authors. “Many organizations are developing their own acculturation initiatives to develop the data and AI literacy of their resources in formats that are appealing. AI goes far beyond technical considerations.” There has been great concern about too much AI de-humanizing healthcare. But AI, once carefully considered and planned, may prove to augment human care. “People, including providers, imagine AI will be cold and calculating without consideration for patients,” says Garg. “Actually, AI-powered automation for healthcare operations frees clinicians and others from the menial, manual tasks that prevent them from focusing all their attention on patient care. While other AI-based products can predict events, the most impactful are incorporated into workflows in order to resolve issues and drive action by frontline users.”


Extinguishing IT Team Burnout Through Mindfulness and Unstructured Time

Mindfulness is fundamentally about awareness. For it to grow, begin by observing your mental state, especially when you find yourself in a stressful situation. Instead of fighting emotions, observe your mental state as negative ones arise. Think about how you’d conduct a deep root cause analysis on an incident and apply that same rigor to yourself. The key to mindfulness is paying attention to your reaction to events without judgment. This can unlock a new way of thinking because it accepts your reaction while still enabling you to do what is required for the job. This contrasts with being stuck behind frustration or avoiding new work as it rolls in. ... Mindfulness is an individual pursuit, while creativity is an enterprise pursuit, and providing space for employees to be creative is another key to preventing burnout. But there are other benefits as well. There is a direct correlation between creativity and productivity. Teams that spend all their time working on specific processes and problems struggle to develop creative solutions that could move a company forward.


Overcoming the Four Biggest Barriers to Machine Learning Adoption

The first hurdles with adopting AI and ML are experienced by certain businesses even before they begin. Machine learning is a vast field that pervades most aspects of AI. It paves the way for a wide range of potential applications, from advanced data analytics and computer vision to Natural Language Processing (NLP) and Intelligent Process Automation (IPA). A general rule of thumb for selecting a suitable ML use case is to “follow the money,” in addition to the usual recommendations on framing the business goals – what companies expect machine learning to do for their business, like improving products or services, improving operational efficiency, and mitigating risk. ... The biggest obstacle to deploying AI-related technologies is corporate culture. Top management is often reluctant to take investment risks, and employees worry about losing their jobs. To secure stakeholder and employee buy-in, businesses should start with small-scale ML use cases that demand realistic investments, deliver quick wins, and persuade executives. By providing workshops, corporate training, and other incentives, they can promote innovation and digital literacy.


Fixing Metadata’s Bad Definition

A bad definition has practical implications. It makes misunderstandings much more likely, which can infect important processes such as data governance and data modeling. Thinking about this became an annoying itch that I couldn’t scratch. What follows is my thought process working toward a better understanding of metadata and its role in today’s data landscape. The problem starts with language. Our lexicon hasn’t kept up with modern data’s complexity and nuance. There are three main issues with our current discourse about metadata:

Vague language - We talk about data in terms of “data” or “metadata”, but one category encompasses the other, which makes it very difficult to differentiate between them. These broad, self-referencing terms leave the door open to being interpreted differently by different people.

A gap in data taxonomy - We don’t have a name for the category of data that metadata describes, which creates a gap at the top of our data taxonomy. We need to fill it with a name for the data that metadata refers to.

Metadata is contextual - The same data set can be both metadata and not metadata depending on the context, so we need to treat metadata as a role that data can play rather than a fixed category.
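
That last point is easy to demonstrate. In the hypothetical Python sketch below, one and the same record set plays the metadata role when it describes files and the data role when it is analyzed in its own right; every name and value is invented.

```python
# A small file catalog: each record describes a file stored somewhere else.
catalog = [
    {"path": "/data/a.csv", "owner": "alice", "size_mb": 120},
    {"path": "/data/b.csv", "owner": "bob",   "size_mb": 300},
]

# Role 1: metadata - the records describe other data (the files themselves).
owner_of = {row["path"]: row["owner"] for row in catalog}

# Role 2: data - the very same records are now the subject of analysis.
total_mb = sum(row["size_mb"] for row in catalog)
print(owner_of["/data/a.csv"], total_mb)  # alice 420
```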


Addressing Privacy Challenges in Retail Media Networks

The top reason that consumers cite for mistrusting how companies handle their data is a lack of transparency. Customers know at this point that companies are collecting their data. And many of these customers won’t mind that you’re doing it, as long as you’re upfront about your intentions and give them a clear choice about whether they consent to have their data collected and shared. What’s more, recent privacy laws have increased the need for companies to shore up data security or face the consequences. In the European Union, there’s the General Data Protection Regulation (GDPR). In the U.S., laws vary by state, but California currently has the most restrictive policies thanks to the California Consumer Privacy Act (CCPA). Companies that have run afoul of these laws have incurred fines as big as $800 million. Clearly, online retailers that already have — or are considering implementing — a retail media network should take notice and reduce their reliance on third-party data sources that may cause trouble from a compliance standpoint.


For Gaming Companies, Cybersecurity Has Become a Major Value Proposition

Like any other vertical industry, gaming companies are tasked with protecting their organizations from all manner of cybersecurity threats to their business. Many of them are large enterprises with the same concerns for the protection of internal systems, financial platforms, and employee endpoints as any other firm. "Gaming companies have the same responsibility as any other organization to protect customer privacy and preserve shareholder value. While not specifically regulated like hospitals or critical infrastructure, they must comply with laws like GDPR and CCPA," explains Craig Burland, CISO for Inversion6, a managed security service provider and fractional CISO firm. "Threats to gaming companies also follow similar trends seen in other segments of the economy — intellectual property (IP) theft, credential theft, and ransomware." IP issues are heightened for these firms, like many in the broader entertainment category, as content leaks for highly anticipated new games or updates can give a brand a black eye at best, and at worst hit them more directly in the financials.


Driving value from data lake and warehouse modernisation

To achieve this, Data Lakes and Data Warehouses need to grow alongside the business requirements in order to be kept efficient and up to date. Go Reply is a leading Google Cloud Platform service integrator (SI) that is helping companies across multiple sectors along this vital journey. Part of the Reply Group, Go Reply is a Google Cloud Premier Partner focussing on areas including Cloud Strategy and Migration; Big Data; Machine Learning; and Compliance. With Data Modernisation capabilities in the GCP environment constantly evolving, businesses can become overwhelmed and unsure of not only next steps, but more importantly the next steps for them, particularly if they don’t have in-house Google expertise. Companies often need to utilise both Data Lakes and Data Warehouses simultaneously, so guidance on how to do this, as well as on driving value from both kinds of storage, is vital. The Go Reply leadership team advises that Google Cloud Platform, the hyperscale cloud of choice for these workloads, brings technology for Data Lake and Data Warehouse efficiency, along with security superior to other market offerings.


Three tech trends on the verge of a breakthrough in 2023

The second big trend is around virtual reality, augmented reality and the metaverse. Big tech has been spending big here, and there are some suggestions that the basic technology is reaching a tipping point, even if the broader metaverse business models are, at best, still in flux. Headset technologies are starting to coalesce and the software is getting easier to use. But the biggest issue is that consumer interest and trust is still low, if only because the science fiction writers got there long ago with their dystopian view of a headset future. Building that consumer trust and explaining why people might want to engage is just as high a priority as the technology itself. One technology trend that's perhaps closer, even though we can't see it, is ambient computing. The concept has been around for decades: the idea is that we don't need to carry tech with us because the intelligence is built into the world around us, from smart speakers to smart homes. Ambient computing is designed to vanish into the environment around us – which is perhaps why it's a trend that has remained invisible to many, at least until now.


CIOs beware: IT teams are changing

The role of IT is shifting to be more strategy-oriented, innovative, and proactive. No longer can days be spent responding to issues – instead, issues must be addressed before they impact employees, and solutions should be developed to ensure they don’t return. What does this look like? Rather than waiting for an employee to flag an issue within their system – such as recurring issues with connectivity, slow computer start time, etc. – IT can identify potential threats to workflows before they happen. They plug the holes, then they establish a strategy and framework to avoid the problem entirely in the future. In short, IT plays a critical role in successful workplace flow in both a proactive and reactive way. For those looking to start a career in IT, the onus falls on them to make suggestions and changes that look holistically at the organization and how employees interact within it. IT teams are making themselves strategic assets by thinking through how to make things more efficient and cost-effective in the long term.


A Comprehensive List of Agile Methodologies and How They Work

Extreme Programming (or XP) offers some of the best buffers against unexpected changes or late-stage customer demands. Feedback gathering takes place within sprints and from the start of business process development, and it’s this feedback that informs everything. This means the entire team becomes accustomed to a culture of pivoting on real-world client demands and outcomes that would otherwise threaten to derail a project and seriously warp production lead times. Any organization with a client-based focus will understand the tightrope that can exist between external demands and internal resources. Continuously orienting those resources based on external demands as they appear is the single most efficient way to achieve harmony. This is something that XP does organically once integrated into your development culture. ... Trimming the fat from the development process is what Lean Development is all about. If something doesn’t add immediate value, or tasks within tasks seem to be piling up, the laser focus of Lean Development steps in.



Quote for the day:

"Confident and courageous leaders have no problems pointing out their own weaknesses and ignorance. " -- Thom S. Rainer