
Daily Tech Digest - August 02, 2025


Quote for the day:

"Successful leaders see the opportunities in every difficulty rather than the difficulty in every opportunity" -- Reed Markham


Chief AI role gains traction as firms seek to turn pilots into profits

CAIOs understand the strategic importance of their role, with 72% saying their organizations risk falling behind without AI impact measurement. Nevertheless, 68% said they initiate AI projects even if they can’t assess their impact, acknowledging that the most promising AI opportunities are often the most difficult to measure. Also, some of the most difficult AI-related tasks an organization must tackle rated low on CAIOs’ priority lists, including measuring the success of AI investments, obtaining funding and ensuring compliance with AI ethics and governance. The study’s authors didn’t suggest a reason for this disconnect. ... Though CEO sponsorship is critical, the authors also stressed the importance of close collaboration across the C-suite. Chief operating officers need to redesign workflows to integrate AI into operations while managing risk and ensuring quality. Tech leaders need to ensure that the technical stack is AI-ready, build modern data architectures and co-create governance frameworks. Chief human resource officers need to integrate AI into HR processes, foster AI literacy, redesign roles and foster an innovation culture. The study found that the factors that separate high-performing CAIOs from their peers are measurement, teamwork and authority. Successful projects address high-impact areas like revenue growth, profit, customer satisfaction and employee productivity.


Mind the overconfidence gap: CISOs and staff don’t see eye to eye on security posture

“Executives typically rely on high-level reports and dashboards, whereas frontline practitioners see the day-to-day challenges, such as limitations in coverage, legacy systems, and alert fatigue — issues that rarely make it into boardroom discussions,” she says. “This disconnect can lead to a false sense of security at the top, causing underinvestment in areas such as secure development, threat modeling, or technical skills.” ... Moreover, the CISO’s rise in prominence and repositioning for business leadership may also be adding to the disconnect, according to Adam Seamons, information security manager at GRC International Group. “Many CISOs have shifted from being technical leads to business leaders. The problem is that in doing so, they can become distanced from the operational detail,” Seamons says. “This creates a kind of ‘translation gap’ between what executives think is happening and what’s actually going on at the coalface.” ... Without a consistent, shared view of risk and posture, strategy becomes fragmented, leading to a slowdown in decision-making or over- or under-investment in specific areas, which in turn create blind spots that adversaries can exploit. “Bridging this gap starts with improving the way security data is communicated and contextualized,” Forescout’s Ferguson advises. 


7 tips for a more effective multicloud strategy

For enterprises using dozens of cloud services from multiple providers, the level of complexity can quickly get out of hand, leading to chaos, runaway costs, and other issues. Managing this complexity needs to be a key part of any multicloud strategy. “Managing multiple clouds is inherently complex, so unified management and governance are crucial,” says Randy Armknecht, a managing director and global cloud practice leader at business advisory firm Protiviti. “Standardizing processes and tools across providers prevents chaos and maintains consistency,” Armknecht says. Cloud-native application protection platforms (CNAPP) — comprehensive security solutions that protect cloud-native applications from development to runtime — “provide foundational control enforcement and observability across providers,” he says. ... Protecting data in multicloud environments involves managing disparate APIs, configurations, and compliance requirements across vendors, Gibbons says. “Unlike single-cloud environments, multicloud increases the attack surface and requires abstraction layers [to] harmonize controls and visibility across platforms,” he says. Security needs to be uniform across all cloud services in use, Armknecht adds. “Centralizing identity and access management and enforcing strong data protection policies are essential to close gaps that attackers or compliance auditors could exploit,” he says.


Building Reproducible ML Systems with Apache Iceberg and SparkSQL: Open Source Foundations

Data lakes were designed for a world where analytics required running batch reports and maybe some ETL jobs. The emphasis was on storage scalability, not transactional integrity. That worked fine when your biggest concern was generating quarterly reports. But ML is different. ... Poor data foundations create costs that don't show up in any budget line item. Your data scientists spend most of their time wrestling with data instead of improving models. I've seen studies suggesting sixty to eighty percent of their time goes to data wrangling. That's... not optimal. When something goes wrong in production – and it will – debugging becomes an archaeology expedition. Which data version was the model trained on? What changed between then and now? Was there a schema modification that nobody documented? These questions can take weeks to answer, assuming you can answer them at all. ... Iceberg's hidden partitioning is particularly nice because it maintains partition structures automatically without requiring explicit partition columns in your queries. Write simpler SQL, get the same performance benefits. But don't go crazy with partitioning. I've seen teams create thousands of tiny partitions thinking it will improve performance, only to discover that metadata overhead kills query planning. Keep partitions reasonably sized (think hundreds of megabytes to gigabytes) and monitor your partition statistics.
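Iceberg's hidden partitioning works by applying a transform such as `days(ts)` to a column internally, so queries filter on the raw timestamp and still prune partitions. A minimal Python sketch of that idea (the function names and the event data are illustrative, not Iceberg's implementation):

```python
from datetime import datetime, timezone

# Sketch of Iceberg's days() partition transform: rows are bucketed by
# days since the Unix epoch, so a query can filter on the raw timestamp
# column and the engine can still prune whole partitions.
EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def days_transform(ts: datetime) -> int:
    """Map an event timestamp to its hidden day-partition value."""
    return (ts - EPOCH).days

def prune_partitions(partitions, start: datetime, end: datetime):
    """Keep only partitions that can contain rows in [start, end)."""
    lo, hi = days_transform(start), days_transform(end)
    return [p for p in partitions if lo <= p <= hi]

events = [
    datetime(2024, 3, 1, 10, tzinfo=timezone.utc),
    datetime(2024, 3, 1, 23, tzinfo=timezone.utc),
    datetime(2024, 3, 5, 7, tzinfo=timezone.utc),
]
# Both March 1 events share one partition; March 5 gets its own.
partitions = sorted({days_transform(e) for e in events})
print(partitions)
```

The partition column never appears in user SQL, which is exactly why the "keep partitions reasonably sized" advice matters: the transform, not the query author, decides how many partitions exist.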


The Creativity Paradox of Generative AI

Before talking about AI's creative ability, we need to understand a simple linguistic limitation: although the data used in these compositions initially carried human meaning, i.e., was seen as information, once it is de- and recomposed in a new, unknown way, the resulting compositions have no human interpretation, at least for a while, i.e., they do not form information. Moreover, these combinations cannot define new needs but rather offer previously unknown propositions for the specified tasks. ... Propagandists of know-it-all AI have a theoretical basis defined in the ethical principles that such an AI should realise and promote. Regardless of how progressive they sound, their core rests on the neo-Marxist concepts of plurality and solidarity. Plurality states that the majority of people, all versus you, is always right (while in human history it is usually wrong); i.e., if an AI tells you that your need is already resolved in the way the AI articulated, you have to agree with it. Solidarity is, in essence, a prohibition of individual opinions and disagreements, even slight ones, with the opinion of others; i.e., everyone must demonstrate solidarity with all. ... The know-it-all AI continuously challenges the very necessity of people's creativity. The Big AI Brothers think for them, decide for them, and resolve all needs; the only thing required in return is obedience to the Big AI Brother's directives.


Doing More With Your Existing Kafka

The transformation into a real-time business isn’t just a technical shift, it’s a strategic one. According to MIT’s Center for Information Systems Research (CISR), companies in the top quartile of real-time business maturity report 62% higher revenue growth and 97% higher profit margins than those in the bottom quartile. These organizations use real-time data not only to power systems but to inform decisions, personalize customer experiences and streamline operations. ... When event streams are discoverable, secure and easy to consume, they are more likely to become strategic assets. For example, a Kafka topic tracking payment events could be exposed as a self-service API for internal analytics teams, customer-facing dashboards or third-party partners. This unlocks faster time to value for new applications, enables better reuse of existing data infrastructure, boosts developer productivity and helps organizations meet compliance requirements more easily. ... Event gateways offer a practical and powerful way to close the gap between infrastructure and innovation. They make it possible for developers and business teams alike to build on top of real-time data, securely, efficiently and at scale. As more organizations move toward AI-driven and event-based architectures, turning Kafka into an accessible and governable part of your API strategy may be one of the highest-leverage steps you can take, not just for IT, but for the entire business.
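The gateway pattern the article describes amounts to an access-policy check plus field-level governance in front of the topic. A toy sketch, with an in-memory dict standing in for the broker and entirely hypothetical client names:

```python
# Sketch of an event gateway in front of Kafka: before a consumer reads
# a topic, the gateway checks an access policy and redacts fields the
# caller is not entitled to see. TOPICS stands in for the real broker.
TOPICS = {
    "payments.events": [
        {"order_id": "o-1", "amount": 42.0, "card_last4": "1234"},
        {"order_id": "o-2", "amount": 9.5, "card_last4": "5678"},
    ]
}

# policy: client id -> (allowed topics, fields to redact)
POLICY = {
    "analytics-team": ({"payments.events"}, {"card_last4"}),
    "partner-app": (set(), set()),
}

def consume(client_id: str, topic: str):
    allowed, redacted = POLICY.get(client_id, (set(), set()))
    if topic not in allowed:
        raise PermissionError(f"{client_id} may not read {topic}")
    return [{k: v for k, v in ev.items() if k not in redacted}
            for ev in TOPICS[topic]]

events = consume("analytics-team", "payments.events")
print(events[0])  # card_last4 is redacted for this client
```

A real gateway would do this at the protocol or API layer, but the shape is the same: one enforcement point that makes a topic safe to expose as a self-service product.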


Meta-Learning: The Key to Models That Can "Learn to Learn"

Meta-learning is a field within machine learning that focuses on algorithms capable of learning how to learn. In traditional machine learning, an algorithm is trained on a specific dataset and becomes specialized for that task. In contrast, meta-learning models are designed to generalize across tasks, learning the underlying principles that allow them to quickly adapt to new, unseen tasks with minimal data. The idea is to make machine learning systems more like humans — able to leverage prior knowledge when facing new challenges. ... This is where meta-learning shines. By training models to adapt to new situations with few examples, we move closer to creating systems that can handle the diverse, dynamic environments found in the real world. ... Meta-learning represents the next frontier in machine learning, enabling models that are adaptable and capable of generalizing across a wide range of tasks with minimal data. By making machines more capable of learning from fewer examples, meta-learning has the potential to revolutionize fields like healthcare, robotics, finance, and more. While there are still challenges to overcome, the ongoing advancements in meta-learning techniques, such as few-shot learning, transfer learning, and neural architecture search, are making it an exciting area of research with vast potential for practical applications.
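The "learning to learn" loop can be made concrete with a toy Reptile-style meta-learner (a first-order simplification of MAML) on one-dimensional linear tasks; everything here is a pedagogical sketch, not a production algorithm:

```python
import random

# Toy meta-learning on tasks y = a * x, each task with its own slope a.
# The inner loop adapts a weight to one task; the outer (meta) loop nudges
# the shared initialization toward the adapted weight, so that a few
# gradient steps suffice on a new, unseen task.
random.seed(0)

def loss_grad(w, a, xs):
    # d/dw of the mean squared error between w*x and a*x
    return sum(2 * (w - a) * x * x for x in xs) / len(xs)

def adapt(w, a, xs, lr=0.05, steps=5):
    for _ in range(steps):
        w -= lr * loss_grad(w, a, xs)
    return w

xs = [0.5, 1.0, 1.5, 2.0]
meta_w, meta_lr = 0.0, 0.3
for _ in range(200):                         # meta-training
    a = random.choice([2.0, 3.0, 4.0])       # sample a task
    adapted = adapt(meta_w, a, xs)
    meta_w += meta_lr * (adapted - meta_w)   # Reptile outer update

# The learned init sits near the centre of the task family, so adapting
# to an unseen slope (3.5) takes only the same five inner steps.
new_task_w = adapt(meta_w, 3.5, xs, steps=5)
print(round(meta_w, 2), round(new_task_w, 2))
```

The point of the sketch is the two nested loops: the inner loop is ordinary training on one task, while the outer loop trains the starting point itself, which is what "learning to learn" means operationally.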


US govt, Big Tech unite to build one stop national health data platform

Under this framework, applications must support identity-proofing standards, consent management protocols, and Fast Healthcare Interoperability Resources (FHIR)-based APIs that allow for real-time retrieval of medical data across participating systems. The goal, according to CMS Administrator Chiquita Brooks-LaSure, is to create a “unified digital front door” to a patient’s health records that are accessible from any location, through any participating app, at any time. This unprecedented public-private initiative builds on rules first established under the 2016 21st Century Cures Act and expanded by the CMS Interoperability and Patient Access Final Rule. This rule mandates that CMS-regulated payers such as Medicare Advantage organizations, Medicaid programs, and Affordable Care Act (ACA)-qualified health plans make their claims, encounter data, lab results, provider remittances, and explanations of benefits accessible through patient-authorized APIs. ... ID.me, another key identity verification provider participating in the CMS initiative, has also positioned itself as foundational to the interoperability framework. The company touts its IAL2/AAL2-compliant digital identity wallet as a gateway to streamlined healthcare access. Through one-time verification, users can access a range of services across providers and government agencies without repeatedly proving their identity.
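The consent-plus-scope gating that patient-authorized FHIR APIs require can be sketched in a few lines. The token store and record store below are illustrative stand-ins, not the CMS framework; the scope string follows the SMART on FHIR `patient/Patient.read` convention:

```python
# Sketch of a patient-authorized FHIR read: serve a Patient resource only
# if the access token is bound to that patient and carries a read scope.
RECORDS = {
    "pat-001": {
        "resourceType": "Patient",
        "id": "pat-001",
        "name": [{"family": "Doe", "given": ["Jane"]}],
    }
}

TOKENS = {
    "tok-abc": {"patient": "pat-001", "scopes": {"patient/Patient.read"}},
}

def read_patient(token: str, patient_id: str) -> dict:
    claims = TOKENS.get(token)
    if claims is None or claims["patient"] != patient_id:
        raise PermissionError("token not authorized for this patient")
    if "patient/Patient.read" not in claims["scopes"]:
        raise PermissionError("missing patient/Patient.read scope")
    return RECORDS[patient_id]

resource = read_patient("tok-abc", "pat-001")
print(resource["resourceType"])
```

Identity proofing (the IAL2/AAL2 piece) happens before a token like this is ever issued; the API layer only has to trust and enforce the claims it carries.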


What Is Data Literacy and Why Does It Matter?

Building data literacy in an organization is a long-term project, often spearheaded by the chief data officer (CDO) or another executive who has a vision for instilling a culture of data in their company. In a report from the MIT Sloan School of Management, experts noted that to establish data literacy in a company, it’s important to first establish a common language so everyone understands and agrees on the definition of commonly used terms. Second, management should build a culture of learning and offer a variety of modes of training to suit different learning styles, such as workshops and self-led courses. Finally, the report noted that it’s critical to reward curiosity – if employees feel they’ll get punished if their data analysis reveals a weakness in the company’s business strategy, they’ll be more likely to hide data or just ignore it. Donna Burbank, an industry thought leader and the managing director of Global Data Strategy, discussed different ways to build data literacy at DATAVERSITY’s Data Architecture Online conference in 2021. ... Focusing on data literacy will help organizations empower their employees, giving them the knowledge and skills necessary to feel confident that they can use data to drive business decisions. As MIT senior lecturer Miro Kazakoff said in 2021: “In a world of more data, the companies with more data-literate people are the ones that are going to win.”


LLMs' AI-Generated Code Remains Wildly Insecure

In the past two years, developers' use of LLMs for code generation has exploded, with two surveys finding that nearly three-quarters of developers have used AI code generation for open source projects, and 97% of developers in Brazil, Germany, and India are using LLMs as well. And when non-developers use LLMs to generate code without having expertise — so-called "vibe coding" — the danger of security vulnerabilities surviving into production code dramatically increases. Companies need to figure out how to secure their code because AI-assisted development will only become more popular, says Casey Ellis, founder at Bugcrowd, a provider of crowdsourced security services. ... Veracode created an analysis pipeline for the most popular LLMs (declining to specify in the report which ones they tested), evaluating each version to gain data on how their ability to create code has evolved over time. More than 80 coding tasks were given to each AI chatbot, and the subsequent code was analyzed. While the earliest LLMs tested — versions released in the first half of 2023 — produced code that did not compile, 95% of the updated versions released in the past year produced code that passed syntax checking. On the other hand, the security of the code has not improved much at all, with about half of the code generated by LLMs having a detectable OWASP Top-10 security vulnerability, according to Veracode.
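Injection flaws are among the OWASP Top-10 issues most often found in generated code, and the failure mode is easy to show. A minimal illustration (the table and payload are invented for the example):

```python
import sqlite3

# String-built SQL versus a parameterized query: the classic injection
# pattern that security scanners keep finding in LLM-generated code.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

def find_user_unsafe(name: str):
    # Vulnerable: attacker-controlled input is spliced into the SQL text.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Safe: the driver binds the value; it can never alter the query.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # leaks every row
print(find_user_safe(payload))    # returns nothing
```

Both versions compile and "work" on benign input, which is exactly why syntax-level progress in LLMs has not translated into security progress: the vulnerable version passes every functional test.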

Daily Tech Digest - April 17, 2025


Quote for the day:

"We are only as effective as our people's perception of us." -- Danny Cox



Why data literacy is essential - and elusive - for business leaders in the AI age

The rising importance of data-driven decision-making is clear, yet the capability remains elusive: trust in the data underpinning these decisions is falling. Business leaders do not feel equipped to find, analyze, and interpret the data they need in an increasingly competitive business environment. The added complexity is the convergence of macro and micro uncertainties -- including economic, political, financial, technological, competitive landscape, and talent shortage variables.  ... The business need for greater adoption of AI capabilities, including predictive, generative and agentic AI solutions, is increasing the need for businesses to have confidence and trust in their data. Survey results show that higher adoption of AI will require stronger data literacy and access to trustworthy data. ... The alarming part of the survey is that 54% of business leaders are not confident in their ability to find, analyze, and interpret data on their own. And fewer than half of business leaders are sure they can use data to drive action and decision-making, generate and deliver timely insights, or effectively use data in their day-to-day work. Data literacy and confidence in the data are two growth opportunities for business leaders across all lines of business.


Cyber threats against energy sector surge as global tensions mount

These cyber-espionage campaigns are primarily driven by geopolitical considerations, as tensions shaped by the Russo-Ukraine war, the Gaza conflict, and the U.S.’ “great power struggle” with China are projected into cyberspace. With hostilities rising, potentially edging toward a third world war, rival nations are attempting to demonstrate their cyber-military capabilities by penetrating Western and Western-allied critical infrastructure networks. Fortunately, these nation-state campaigns have overwhelmingly been limited to espionage, as opposed to Stuxnet-style attacks intended to cause harm in the physical realm. A secondary driver of increasing cyberattacks against energy targets is technological transformation, marked by cloud adoption, which has largely mediated the growing convergence of IT and OT networks. OT-IT convergence across critical infrastructure sectors has thus made networked industrial Internet of Things (IIoT) appliances and systems more penetrable to threat actors. Specifically, researchers have observed that adversaries are using compromised IT environments as staging points to move laterally into OT networks. Compromising OT can be particularly lucrative for ransomware actors, because this type of attack enables adversaries to physically paralyze energy production operations, empowering them with the leverage needed to command higher ransom sums. 


The Active Data Architecture Era Is Here, Dresner Says

“The buildout of an active data architecture approach to accessing, combining, and preparing data speaks to a degree of maturity and sophistication in leveraging data as a strategic asset,” Dresner Advisory Services writes in the report. “It is not surprising, then, that respondents who rate their BI initiatives as a success place a much higher relative importance on active data architecture concepts compared with those organizations that are less successful.” Data integration is a major component of an active data architecture, but there are different ways that users can implement data integration. According to Dresner, the majority of active data architecture practitioners are utilizing batch and bulk data integration tools, such as ETL/ELT offerings. Fewer organizations are utilizing data virtualization as the primary data integration method, or real-time event streaming (i.e. Apache Kafka) or message-based data movement (i.e. RabbitMQ). Data catalogs and metadata management are important aspects of an active data architecture. “The diverse, distributed, connected, and dynamic nature of active data architecture requires capabilities to collect, understand, and leverage metadata describing relevant data sources, models, metrics, governance rules, and more,” Dresner writes. 
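The batch/bulk integration style that Dresner finds most common follows a fixed extract-transform-load shape. A minimal sketch, with an invented source feed and a SQLite target standing in for the warehouse:

```python
import sqlite3

# Sketch of batch ETL: pull rows from a source, clean and type-cast them
# in flight, then bulk-load them into a target table on a schedule.
source = [
    {"customer": " Alice ", "amount": "120.50"},
    {"customer": "Bob", "amount": "80.00"},
]

def transform(row):
    # Clean whitespace and cast the string amount to a number.
    return (row["customer"].strip(), float(row["amount"]))

target = sqlite3.connect(":memory:")
target.execute("CREATE TABLE sales (customer TEXT, amount REAL)")
target.executemany("INSERT INTO sales VALUES (?, ?)",
                   (transform(r) for r in source))

total = target.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(total)
```

Data virtualization and event streaming replace the middle of this pipeline (query federation and continuous delivery, respectively), but the governance question Dresner raises, knowing what each source means via metadata, applies to all three styles equally.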


How can businesses solve the AI engineering talent gap?

“It is unclear whether nationalistic tendencies will encourage experts to remain in their home countries. Preferences may not only be impacted by compensation levels, but also by international attention to recent US treatment of immigrants and guests, as well as controversy at academic institutions,” says Bhattacharyya. But businesses can mitigate this global uncertainty, to some extent, by casting their hiring net wider to include remote working. Indeed, Thomas Mackenbrock, CEO-designate of Paris headquartered BPO giant Teleperformance says that the company’s global footprint helps it to fulfil AI skills demand. “We’re not reliant on any single market [for skills] as we are present in almost 100 markets,” explains Mackenbrock. ... “The future workforce will need to combine human ingenuity with new and emerging AI technologies; going beyond just technical skills alone,” says Khaled Benkrid, senior director of education and research at Arm. “Academic institutions play a pivotal role in shaping this future workforce. By collaborating with industry to conduct research and integrate AI into their curricula, they ensure that graduates possess the skills required by the industry. “Such collaborations with industry partners keep academic programs aligned with research frontiers and evolving job market demands, creating a seamless transition for students entering the workforce,” says Benkrid.


Breaking Down the Walls Between IT and OT

“Even though there's cyber on both sides, they are fundamentally different in concept,” Ian Bramson, vice president of global industrial cybersecurity at Black & Veatch, an engineering, procurement, consulting, and construction company, tells InformationWeek. “It's one of the things that have kept them more apart traditionally.” ... “OT is looked at as having a much longer lifespan, 30 to 50 years in some cases. An IT asset, the typical laptop these days that's issued to an individual in a company, three years is about when most organization start to think about issuing a replacement,” says Chris Hallenbeck, CISO for the Americas at endpoint management company Tanium. ... The skillsets required of the teams to operate IT and OT systems are also quite different. On one side, you likely have people skilled in traditional systems engineering. They may have no idea how to manage the programmable logic controllers (PLC) commonly used in OT systems. The divide between IT and OT has been, in some ways, purposeful. The Purdue model, for example, provides a framework for segmenting ICS networks, keeping them separate from corporate networks and the internet. ... Cyberattack vectors on IT and OT environments look different and result in different consequences. “On the IT side, the impact is primarily data loss and all of the second order effects of your data getting stolen or your data getting held for ransom,” says Shankar. 


Are Return on Equity and Value Creation New Metrics for CIOs?

While driving efficiency is not a new concept for technology leaders, what is different today is the scale and significance of their efforts. In many organizations, CIOs are being tasked with reimagining how value is generated, assessed and delivered. ... Traditionally, technology ROI discussions have focused on cost savings, automation, consolidation and reduced headcount. But that perspective is shifting rapidly. CIOs are now prioritizing customer acquisition, retention, pricing power and speed to market. CIOs also play a more integral role in product innovation than ever before. To remain relevant, they must speak the language of gross margin, not just uptime. This evolution is increasingly reflected in boardroom conversations. CIOs once presented dashboards of uptime and service-level agreements, but today, they discuss customer value, operational efficiency and platform monetization. ... In some cases, technology leaders scale too quickly before proving value. For example, expensive cloud migrations may proceed without a corresponding shift in the business model. This can result in data lakes with no clear application or platforms launched without product-market fit. These missteps can severely undermine ROE. 


AI brings order to observability disorder

Artificial intelligence has contributed to complexity. Businesses now want to monitor large language models as well as applications to spot anomalies that may contribute to inaccuracies, bias, and slow performance. Legacy observability systems were never designed to bring together these disparate sources of data. A unified observability platform leveraging AI can radically simplify the tools and processes for improved visibility and resolving problems faster, enabling the business to optimize operations based on reliable insights. By consolidating on one set of integrated observability solutions, organizations can lower costs, simplify complex processes, and enable better cross-function collaboration. “Noise overwhelms site reliability engineering teams,” says Gagan Singh, Vice President of Product Marketing at Elastic. Irrelevant and low-priority alerts can overwhelm engineers, leading them to overlook critical issues and delaying incident response. Machine learning models are ideally suited to categorizing anomalies and surfacing relevant alerts so engineers can focus on critical performance and availability issues. “We can now leverage GenAI to enable SREs to surface insights more effectively,” Singh says.
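The simplest version of "surface relevant alerts, suppress the noise" is a statistical baseline check. A toy sketch with invented latency numbers (real platforms use far richer models, but the filtering logic is the same):

```python
from statistics import mean, stdev

# Score each incoming metric against its recent baseline and surface only
# statistical outliers, so engineers see fewer, more relevant alerts.
baseline_latency_ms = [101, 98, 103, 99, 102, 100, 97, 104]

def is_anomalous(value, history, threshold=3.0):
    mu, sigma = mean(history), stdev(history)
    return abs(value - mu) / sigma > threshold

incoming = [100, 105, 180, 99]
surfaced = [v for v in incoming if is_anomalous(v, baseline_latency_ms)]
print(surfaced)  # only the genuine latency spike is escalated
```

Small fluctuations (105 ms against a ~100 ms baseline) stay below the threshold and never page anyone; the 180 ms spike does, which is the noise reduction the article attributes to ML-driven alerting.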


Why Most IaC Strategies Still Fail — And How To Fix Them

There are a few common reasons IaC strategies fail in practice. Let’s explore what they are, and dive into some practical, battle-tested fixes to help teams regain control, improve consistency and deliver on the original promise of IaC. ... Without a unified direction, fragmentation sets in. Teams often get locked into incompatible tooling — some using AWS CloudFormation for perceived enterprise alignment, others favoring Terraform for its flexibility. These tool silos quickly become barriers to collaboration. ... Resistance to change also plays a role. Some engineers may prefer to stick with familiar interfaces and manual operations, viewing IaC as an unnecessary complication. Meanwhile, other teams might be fully invested in reusable modules and automated pipelines, leading to fractured workflows and collaboration breakdowns. Successful IaC implementation requires building skills, bridging silos and addressing resistance with empathy and training — not just tooling. To close the gap, teams need clear onboarding plans, shared coding standards and champions who can guide others through real-world usage — not just theory. ... Drift is inevitable: manual changes, rushed fixes and one-off permissions often leave code and reality out of sync. Without visibility into those deviations, troubleshooting becomes guesswork. 
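Drift detection, the fix for that last failure mode, boils down to diffing declared state against observed state. A minimal sketch with invented resource attributes (real tools like `terraform plan` compare provider state, but the logic is the same shape):

```python
# Diff the state declared in code against the state observed in the live
# environment, so out-of-band changes surface instead of hiding until an
# incident. The resource attributes below are illustrative.
declared = {
    "instance_type": "t3.medium",
    "open_ports": [22, 443],
    "tags": {"env": "prod"},
}
actual = {
    "instance_type": "t3.medium",
    "open_ports": [22, 443, 8080],   # someone opened a port by hand
    "tags": {"env": "prod", "debug": "true"},
}

def detect_drift(declared, actual):
    drift = {}
    for key in declared.keys() | actual.keys():
        if declared.get(key) != actual.get(key):
            drift[key] = {"declared": declared.get(key),
                          "actual": actual.get(key)}
    return drift

drift = detect_drift(declared, actual)
print(sorted(drift))  # the drifted attributes, ready for review
```

Running a check like this on a schedule, and treating a non-empty diff as a build failure, turns drift from guesswork into a routine, reviewable event.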


What will the sustainable data center of the future look like?

The energy issue does not affect operators and suppliers alone. If a customer uses a lot of energy, they will get a bill to match, says Van den Bosch. “I [as a supplier] have to provide the customer with all kinds of details about my infrastructure. That includes everything from air conditioning to the specific energy consumption of the server racks. The customer is then able to reduce that energy consumption.” This can be done, for example, by replacing servers earlier than before, a departure from the upgrade cycles of yesteryear. Ruud Mulder of Dell Technologies calls for the sustainability of equipment to be made measurable in great detail. This can be done by means of a digital passport, showing where all the materials come from and how recyclable they are. He thinks there is still much room for improvement in this area. For example, future designs can be recycled better by separating plastic and gold from each other, refurbishing components and more. This yield increase is often attractive, as more computing power is required for ambitious AI plans, and the efficiency of chips increases with each generation. “The transition to AI means that you sometimes have to say goodbye to your equipment sooner,” says Mulder. The AI issue is highly relevant to the future of the modern data center in any case. 


Fitness Functions for Your Architecture

Fitness functions offer us self-defined guardrails for certain aspects of our architecture. If we stay within certain (self-chosen) ranges, we're safe (our architecture is "good"). ... Many projects already use some kinds of fitness functions, although they might not use the term. For example, metrics from static code checkers, linters, and verification tools (such as PMD, FindBugs/SpotBugs, ESLint, SonarQube, and many more). Collecting the metrics alone doesn't make it a fitness function, though. You'll need fast feedback for your developers, and you need to define clear measures: limits or ranges for tolerated violations and actions to take if a metric indicates a violation. In software architecture, we have certain architectural styles and patterns to structure our code in order to improve understandability, maintainability, replaceability, and so on. Maybe the most well-known pattern is a layered architecture with, quite often, a front-end layer above a back-end layer. To take advantage of such layering, we'll allow and disallow certain dependencies between the layers. Usually, dependencies are allowed from top to bottom, i.e. from the front end to the back end, but not the other way around. A fitness function for a layered architecture will analyze the code to find all dependencies between the front end and the back end.
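A layering fitness function of this kind can be small. The sketch below inlines module sources as strings for illustration; a real check would walk the repository, and tools like ArchUnit do this for Java:

```python
import ast

# Fitness function for a layered architecture: parse each module and fail
# if the back end imports from the front end (the forbidden direction).
MODULES = {
    "frontend.views": "import backend.service\n",   # allowed: top-down
    "backend.service": "import backend.db\n",       # allowed: same layer
    "backend.db": "import frontend.views\n",        # forbidden: upward
}

def forbidden_dependencies(modules, from_layer="backend",
                           to_layer="frontend"):
    violations = []
    for name, source in modules.items():
        if not name.startswith(from_layer):
            continue
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Import):
                targets = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom):
                targets = [node.module or ""]
            else:
                continue
            violations += [(name, t) for t in targets
                           if t.startswith(to_layer)]
    return violations

violations = forbidden_dependencies(MODULES)
print(violations)  # non-empty list -> fail the build
```

Wired into CI with a zero-tolerance limit, this gives exactly the fast feedback and clear measure the text calls for: the build breaks the moment someone adds an upward dependency.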

Daily Tech Digest - January 29, 2025


Quote for the day:

"Added pressure and responsibility should not change one's leadership style, it should merely expose that which already exists." -- Mark W. Boyer


Evil Models and Exploits: When AI Becomes the Attacker

A more structured threat emerges with technologies like the Model Context Protocol (MCP). Originally introduced by Anthropic, MCP allows large language models (LLMs) to interact with host machines via JSON-RPC APIs. This enables LLMs to perform sophisticated operations by controlling local resources and services. While MCP is being embraced by developers for legitimate use cases, such as automation and integration, its darker implications are clear. An MCP-enabled system could orchestrate a range of malicious activities with ease. Think of it as an AI-powered operator capable of executing everything from reconnaissance to exploitation. ... The proliferation of AI models is both a blessing and a curse. Platforms like Hugging Face host over a million models, ranging from state-of-the-art neural networks to poorly designed or maliciously altered versions. Amid this abundance lies a growing concern: model provenance. Imagine a widely used model, fine-tuned by a seemingly reputable maintainer, turning out to be a tool of a state actor. Subtle modifications in the training data set or architecture could embed biases, vulnerabilities or backdoors. These “evil models” could then be distributed as trusted resources, only to be weaponized later. This risk underscores the need for robust mechanisms to verify the origins and integrity of AI models.
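The most basic of those integrity mechanisms is checksum verification before a model is loaded. A minimal sketch, with an in-memory dict standing in for a signed manifest from the maintainer (real supply-chain defenses add cryptographic signatures and provenance attestations on top):

```python
import hashlib

# Verify a downloaded model file against a checksum published by the
# maintainer before loading it. The registry and file names are invented
# for the example.
TRUSTED_CHECKSUMS = {
    "sentiment-base.bin": hashlib.sha256(b"original weights").hexdigest(),
}

def verify_model(name: str, payload: bytes) -> bool:
    expected = TRUSTED_CHECKSUMS.get(name)
    actual = hashlib.sha256(payload).hexdigest()
    return expected is not None and expected == actual

# The genuine artifact verifies; a tampered ("evil") copy does not,
# so it is rejected before it ever reaches the inference stack.
print(verify_model("sentiment-base.bin", b"original weights"))
print(verify_model("sentiment-base.bin", b"original weights + backdoor"))
```

A hash only proves the bytes match what the maintainer published; if the maintainer is the adversary, as in the scenario above, you also need independent review of what was published, which is why provenance is harder than integrity.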


The tipping point for Generative AI in banking

Advancements in AI are allowing banks and other fintechs to embed the technology across their entire value chain. For example, TBC is leveraging AI to make 42% of all payment reminder calls to customers with loans that are up to 30 days or less overdue and is getting ready to launch other AI-enabled solutions. Customers normally cannot differentiate the AI calls powered by our tech from calls by humans, even as the AI calls are ten times more efficient for TBC’s bottom line, compared with human operator calls. Klarna rolled out an AI assistant that handled 2.3 million conversations in its first month of operation, accounting for two-thirds of Klarna’s customer service chats, or the workload of 700 full-time agents, the company estimated. Deutsche Bank leverages generative AI for software creation and managing adverse media, while the European neobank Bunq applies it to detect fraud. Even smaller regional players, provided they have the right tech talent in place, will soon be able to deploy Gen AI at scale and incorporate the latest innovations into their operations. Next year is set to be a watershed year when this step change will create a clear division in the banking sector between AI-enabled champions and other players that will soon start lagging behind. 


Want to be an effective cybersecurity leader? Learn to excel at change management

Security should never be an afterthought; the change management process shouldn’t be, either, says Michael Monday, a managing director in the security and privacy practice at global consulting firm Protiviti. “The change management process should start early, before changing out the technology or process,” he says. “There should be some messages going out to those who are going to be impacted letting them know, [otherwise] users will be surprised, they won’t know what’s going on, business will push back and there will be confusion.” ... “It’s often the CISO who now has to push these new things,” says Moyle, a former CISO, founding partner of the firm SecurityCurve, and a member of the Emerging Trends Working Group with the professional association ISACA. In his experience, Moyle says he has seen some workers more willing to change than others and learned to enlist those workers as allies to help him achieve his goals. ... When it comes to the people portion, she tells CISOs to “feed supporters and manage detractors.” As for process, “identify the key players for the security program and understand their perspective. There are influencers, budget holders, visionaries, and other stakeholders — each of which needs to be heard, and persuaded, especially if they’re a detractor.”


Preparing financial institutions for the next generation of cyber threats

Collaboration between financial institutions, government agencies, and other sectors is crucial in combating next-generation threats. This cooperative approach enhances the ability to detect, respond to, and mitigate sophisticated threats more effectively. Visa regularly works with international agencies of all sizes, including the US Department of Justice, FBI, Secret Service and Europol, to help identify and apprehend fraudsters and other criminals. Visa uses its AI and ML capabilities to identify patterns of fraud and cybercrime and works with law enforcement to find these bad actors and bring them to justice. ... Financial institutions face distinct vulnerabilities compared to other industries, particularly due to their role in critical infrastructure and financial ecosystems. As high-value targets, they manage large sums of money and sensitive information, making them prime targets for cybercriminals. Their operations involve complex and interconnected systems, often including legacy technologies and numerous third-party vendors, which can create security gaps. Regulatory and compliance challenges add another layer of complexity, requiring stringent data protection measures to avoid hefty fines and maintain customer trust.


Looking back to look ahead: from Deepfakes to DeepSeek, what lies ahead in 2025

Enterprises increasingly turned to AI-native security solutions, employing continuous multi-factor authentication and identity verification tools. These technologies monitor behavioral patterns or other physical world signals to prove identity — innovations that can now help prevent incidents like the North Korean hiring scheme. However, hackers may now gain another inside route to enterprise security. The new breed of unregulated and offshore LLMs like DeepSeek creates new opportunities for attackers. In particular, using DeepSeek’s AI model gives attackers a powerful tool to better discover and take advantage of the cyber vulnerabilities of any organization. ... Deepfake technology continues to blur the lines between reality and fiction. ... Organizations must combat the increasing complexity of identity fraud, hackers, cyber security thieves, and data center poachers each year. In addition to all of the threats mentioned above, 2025 will bring an increasing need to address IoT and OT security issues, data protection in the third-party cloud and AI infrastructure, and the use of AI agents in the SOC. To help thwart this year’s cyber threats, CISOs and CTOs must work together, communicate often, and identify areas to minimize risks for deepfake fraud across identity, brand protection, and employee verification.


The Product Model and Agile

First, the product model is not new; it’s been out there for more than 20 years. So I have never argued that the product model is “the next new thing,” as I think that’s not true. Strong product companies have been following the product model for decades, but most companies around the world have only recently been exposed to this model, which is why so many people think of it as new. Second, while I know this irritates many people, today there are very different definitions of what it even means to be “Agile.” Some people consider SAFe as Agile. If that’s what you consider Agile, then I would say that Agile plays no part in the product model, as SAFe is pretty much the antithesis of the product model. This difference is often characterized today as “fake Agile” versus “real Agile.” And to be clear, if you’re running XP, or Kanban, or Scrum, or even none of the Agile ceremonies, yet you are consistently doing continuous deployment, then at least as far as I’m concerned, you’re running “real Agile.” Third, we should separate the principles of Agile from the various, mostly project management, processes that have been set up around those principles. ... Finally, it’s also important to point out that there is one Agile principle that might be good enough for custom or contract software work, but is not sufficient for commercial product work. This is the principle that “working software is the primary measure of progress.”


Next Generation Observability: An Architectural Introduction

It's always a challenge when creating architectural content, trying to capture real-world stories into a generic enough format to be useful without revealing any organization's confidential implementation details. We are basing these architectures on common customer adoption patterns. That's very different from most of the traditional marketing activities usually associated with generating content for the sole purpose of positioning products as solutions. When you're basing the content on actual execution in solution delivery, you're cutting out the marketing chaff. This observability architecture provides us with a way to map a solution using open-source technologies focusing on the integrations, structures, and interactions that have proven to work at scale. Where those might fail us at scale, we will provide other options. What's not included are vendor stories, which are normal in most marketing content. Those stories that, when it gets down to implementation crunch time, might not fully deliver on their promises. Let's look at the next-generation observability architecture and explore its value in helping our solution designs. The first step is always to clearly define what we are focusing on when we talk about the next-generation observability architecture.


AI SOC Analysts: Propelling SecOps into the future

Traditional, manual SOC processes already struggling to keep pace with existing threats are far outpaced by automated, AI-powered attacks. Adversaries are using AI to launch sophisticated and targeted attacks, putting additional pressure on SOC teams. To defend effectively, organizations need AI solutions that can rapidly sort signals from noise and respond in real time. AI-generated phishing emails are now so realistic that users are more likely to engage with them, leaving analysts to untangle the aftermath—deciphering user actions and gauging exposure risk, often with incomplete context. ... The future of security operations lies in seamless collaboration between human expertise and AI efficiency. This synergy doesn't replace analysts but enhances their capabilities, enabling teams to operate more strategically. As threats grow in complexity and volume, this partnership ensures SOCs can stay agile, proactive, and effective. ... Triaging and investigating alerts has long been a manual, time-consuming process that strains SOC teams and increases risk. Prophet Security changes that. By leveraging cutting-edge AI, large language models, and advanced agent-based architectures, Prophet AI SOC Analyst automatically triages and investigates every alert with unmatched speed and accuracy.


Apple researchers reveal the secret sauce behind DeepSeek AI

The ability to use only some of the total parameters of a large language model and shut off the rest is an example of sparsity. That sparsity can have a major impact on how big or small the computing budget is for an AI model. AI researchers at Apple, in a report out last week, explain nicely how DeepSeek and similar approaches use sparsity to get better results for a given amount of computing power. Apple has no connection to DeepSeek, but it does its own AI research on a regular basis, so developments from outside companies such as DeepSeek naturally feed into its ongoing work in the field. In the paper, titled "Parameters vs FLOPs: Scaling Laws for Optimal Sparsity for Mixture-of-Experts Language Models," posted on the arXiv pre-print server, lead author Samir Abnar of Apple and other Apple researchers, along with collaborator Harshay Shah of MIT, studied how performance varied as they exploited sparsity by turning off parts of the neural net. ... Abnar and team ask whether there's an "optimal" level for sparsity in DeepSeek and similar models, meaning, for a given amount of computing power, is there an optimal number of those neural weights to turn on or off? It turns out you can fully quantify sparsity as the percentage of all the neural weights you can shut down, with that percentage approaching but never equaling 100% of the neural net being "inactive."
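The closing idea above can be put into numbers. Here is a tiny sketch, using made-up figures, of how sparsity is quantified in a mixture-of-experts model where each token activates only k of N experts:

```python
# Illustrative numbers only: quantify sparsity as the fraction of weights left
# inactive per token in a mixture-of-experts (MoE) model that routes each
# token to k of N experts plus a small set of always-on shared parameters.
def active_params(total_expert_params, n_experts, k_active, shared_params):
    """Parameters actually exercised per token."""
    per_expert = total_expert_params / n_experts
    return shared_params + k_active * per_expert

def sparsity(total_expert_params, n_experts, k_active, shared_params):
    """Fraction of all weights that stay shut off for a given token."""
    total = shared_params + total_expert_params
    used = active_params(total_expert_params, n_experts, k_active, shared_params)
    return 1 - used / total

# e.g. 64 experts, 2 active per token, expert weights dominating the model
print(sparsity(total_expert_params=600e9, n_experts=64, k_active=2,
               shared_params=30e9))
```

With these invented figures, roughly 92% of the weights sit idle for any single token, which is exactly the budget lever the paper studies: the same total parameter count at a fraction of the per-token compute.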


What Data Literacy Looks Like in 2025

“The foundation of data literacy lies in having a basic understanding of data. Non-technical people need to master the basic concepts, terms, and types of data, and understand how data is collected and processed,” says Li. “Meanwhile, data literacy should also include familiarity with data analysis tools. ... “Organizations should also avoid the misconception that fostering GenAI literacy alone will help developing GenAI solutions. For this, companies need even greater investments in expert AI talent -- data scientists, machine learning engineers, data engineers, developers and AI engineers,” says Carlsson. “While GenAI literacy empowers individuals across the workforce, building transformative AI capabilities requires skilled teams to design, fine-tune and operationalize these solutions. Companies must address both.” ... “Data literacy in 2025 can’t just be about enabling employees to work with data. It needs to be about empowering them to drive real business value,” says Jain. “That’s how organizations will turn data into dollars and ensure their investments in technology and training actually pay off.” ... “Organizations can embed data literacy into daily operations and culture by making data-driven thinking a core part of every role,” says Choudhary.

Daily Tech Digest - February 01, 2024

Making the Leap From Data Governance to AI Governance

One of the AI governance challenges Regensburger is researching revolves around ensuring the veracity of outcomes, of the content that’s generated by GenAI. “It’s sort of the unknown question right now,” he says. “There’s a liability question on how you use…AI as a decision support tool. We’re seeing it in some regulations like the AI Act and President Biden’s proposed AI Bill of Rights, where outcomes become really important, and it moves that into the governance sphere.” LLMs have the tendency to make things up out of whole cloth, which poses a risk to anyone who uses it. For instance, Regensburger recently asked an LLM to generate an abstract on a topic he researched in graduate school. “My background is in high energy physics,” he says. “The text it generated seemed perfectly reasonable, and it generated a series of citations. So I just decided to look at the citations. It’s been a while since I’ve been in graduate school. Maybe something had come up since then? “And the citations were completely fictitious,” he continues. “Completely. They look perfectly reasonable. They had Physical Review Letters. It had all the right formats. And at your first casual inspection it looked reasonable. 


Architecting for Industrial IoT Workloads: A Blueprint

The first step in an IIoT-enabled environment is to establish communication interfaces with the machinery. In this step, there are two primary goals: read data from machines (telemetry) and write data to machines (control). Machines in a manufacturing plant can have legacy/proprietary communication interfaces and modern IoT sensors. Most industrial machines today are operated by programmable logic controllers (PLC). A PLC is an industrial computer ruggedized and adapted to control manufacturing processes — such as assembly lines, machines, and robotic devices — or any activity requiring high reliability, ease of programming and process fault diagnosis. However, PLCs provide limited connectivity interfaces with the external world over protocols like HTTP and MQTT, restricting external data reads (for telemetry) and writes (for control and automation). Apache PLC4X bridges this gap by providing a set of API abstractions over legacy and proprietary PLC protocols. PLC4X is an open-source universal protocol adapter for IIoT appliances that enables communication over protocols including, but not limited to, Siemens S7, Modbus, Allen Bradley, Beckhoff ADS, OPC-UA, Emerson, Profinet, BACnet and Ethernet.
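PLC4X itself is a Java library, but the universal-adapter idea it implements can be sketched in a few lines of Python: a driver manager resolves a connection string's scheme to a protocol driver, and every driver exposes the same read API. The drivers, addresses, and values below are stand-ins, not real protocol implementations.

```python
# Sketch of the "universal protocol adapter" pattern: one uniform read API
# dispatched to per-protocol drivers behind a connection-string scheme.
# Everything here is illustrative; real drivers speak Modbus, S7, OPC-UA, etc.
class ModbusDriver:
    scheme = "modbus-tcp"
    def read(self, address):
        # A real driver would issue a Modbus request over TCP here.
        return {"address": address, "value": 42}

class S7Driver:
    scheme = "s7"
    def read(self, address):
        # A real driver would speak the Siemens S7 protocol here.
        return {"address": address, "value": 7}

class DriverManager:
    def __init__(self, drivers):
        self._drivers = {d.scheme: d for d in drivers}
    def connect(self, url):
        # "modbus-tcp://10.0.0.5" -> the Modbus driver, and so on.
        scheme = url.split("://", 1)[0]
        return self._drivers[scheme]

manager = DriverManager([ModbusDriver(), S7Driver()])
plc = manager.connect("modbus-tcp://10.0.0.5")
print(plc.read("holding-register:40001"))
```

The payoff of the pattern is that telemetry code upstream never changes when a plant mixes Siemens, Allen Bradley, and Modbus equipment; only the connection string does.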


6 user experience mistakes made for security and how to fix them

The challenge here is to communicate effectively with your non-experts in a way that they understand the “what” and “why” of cybersecurity. “The goal is to make it practical rather than condescending, manipulative, or punitive,” Sunshine says. “You need to take down that fear factor.” So long as people have the assurance that they can come clean and not be fired for that kind of mistake, they can help strengthen security by coming forward about problems instead of trying to cover them up. ... To achieve optimal results, you have to strike the right balance between the level of security required and the convenience of users. Much depends on the context. The bar is much higher for those who work with government entities, for example, than a food truck business, Sunshine says. Putting all the safeguards required for the most regulated industries into effect for businesses that don’t require that level of security introduces unnecessary friction. Failing to differentiate among different users and needs is the fundamental flaw of many security protocols that require everyone to use every security measure for everything.


5 New Ways Cyberthreats Target Your Bank Account

Deepfake technology, initially designed for entertainment, has evolved into a potent tool for cybercriminals. Through artificial intelligence and machine learning, these technologies fuel intricate social engineering attacks, enabling attackers to mimic trusted individuals with astonishing precision. This proficiency grants them access to critical data like banking credentials, resulting in significant financial repercussions. ... Modern phishing tactics now harness artificial intelligence to meticulously analyse extensive data pools, encompassing social media activities and corporate communications. This in-depth analysis enables the creation of highly personalised and contextually relevant messages, mimicking trusted sources like banks or financial institutions. This heightened level of customisation significantly enhances the credibility of these communications, amplifying the risk of recipients disclosing sensitive information, engaging with malicious links, or unwittingly authorising fraudulent transactions. ... Credential stuffing is a prevalent and dangerous method cybercriminals use to breach bank accounts. This attack method exploits the widespread practice of password reuse across multiple sites and services.
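One common mitigation follows directly from the password-reuse problem: screen passwords against known breach corpora so credential pairs stolen from one site cannot be replayed on yours. A toy Python sketch follows; the breached set is invented, and production systems typically query a k-anonymity range API such as Have I Been Pwned's rather than holding a local list.

```python
import hashlib

# Toy defence against credential stuffing: reject any password that appears
# in a known-breach corpus. This tiny in-memory set is made up for the demo.
BREACHED_SHA1 = {
    hashlib.sha1(p.encode()).hexdigest()
    for p in ["password123", "qwerty", "letmein"]
}

def is_breached(password: str) -> bool:
    """True if the password's SHA-1 digest matches a known-breached one."""
    return hashlib.sha1(password.encode()).hexdigest() in BREACHED_SHA1

print(is_breached("password123"))                 # True
print(is_breached("correct horse battery staple"))  # False
```

Paired with rate limiting and multi-factor authentication, this check removes the cheapest rung of the attack: a stuffing run only works when the reused password still exists somewhere it can be replayed.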


Italian Businesses Hit by Weaponized USBs Spreading Cryptojacking Malware

A financially motivated threat actor known as UNC4990 is leveraging weaponized USB devices as an initial infection vector to target organizations in Italy. Google-owned Mandiant said the attacks single out multiple industries, including health, transportation, construction, and logistics. "UNC4990 operations generally involve widespread USB infection followed by the deployment of the EMPTYSPACE downloader," the company said in a Tuesday report. "During these operations, the cluster relies on third-party websites such as GitHub, Vimeo, and Ars Technica to host encoded additional stages, which it downloads and decodes via PowerShell early in the execution chain." ... Details of the campaign were previously documented by Fortgale and Yoroi in early December 2023, with the former tracking the adversary under the name Nebula Broker. The infection begins when a victim double-clicks on a malicious LNK shortcut file on a removable USB device, leading to the execution of a PowerShell script that's responsible for downloading EMPTYSPACE (aka BrokerLoader or Vetta Loader) from a remote server via another intermediate PowerShell script hosted on Vimeo.


Understanding Architectures for Multi-Region Data Residency

A critical principle in the context of multi-region deployments is establishing clarity on truth and trust. While knowing the source of truth for a piece of data is universally important, it becomes especially crucial in multi-region scenarios. Begin by identifying a fundamental unit, an "atom," within which all related data resides in one region. This could be an organizational entity like a company, a team, or an organization, depending on your business structure. Any operation that involves crossing these atomic boundaries inherently becomes a cross-region scenario. Therefore, defining this atomic unit is essential in determining the source of truth for your multi-region deployment. In terms of trust, as different regions hold distinct data, communication between them becomes necessary. This could involve scenarios like sharing authentication tokens across regions. The level of trust between regions is a decision rooted in the specific needs and context of your business. Consider the geopolitical landscape if governments are involved, especially if cells are placed in regions with potentially conflicting interests.
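The "atom" idea above reduces to a small sketch: each atomic unit has exactly one home region that is its source of truth, so any request can be mechanically classified as in-region or cross-region. The organization names and regions below are illustrative.

```python
# Each atomic unit (here, an organization) maps to one home region that is
# the source of truth for all of its data. Names and regions are made up.
HOME_REGION = {"acme-corp": "eu-west", "globex": "us-east"}

def classify(request_org: str, serving_region: str) -> str:
    """Is this request served by the org's home region, or crossing regions?"""
    home = HOME_REGION[request_org]
    return "in-region" if home == serving_region else "cross-region"

print(classify("acme-corp", "eu-west"))  # in-region
print(classify("acme-corp", "us-east"))  # cross-region
```

Anything the classifier labels cross-region is exactly where the trust decisions described above apply, such as whether an authentication token minted in one region is honored in another.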


Developing a Data Literacy Program for Your Organization

Before developing a data literacy program for an organization, it is crucial to conduct a comprehensive training needs assessment. This assessment helps in understanding the current level of data literacy within the organization and identifying areas that require improvement. It involves gathering information about employees’ existing knowledge, skills, and attitudes toward data analysis and interpretation. To conduct the needs assessment, different methods can be employed. Surveys, interviews, focus groups, or even analyzing existing data can provide valuable insights into employees’ proficiency levels and their specific learning needs. By involving various stakeholders, such as managers, department heads, and employees themselves, in this process, a holistic understanding of the organization’s requirements can be achieved. ... It is also beneficial to compare the program’s outcomes against predefined benchmarks or industry standards. This allows organizations to benchmark their progress against other similar initiatives and identify areas where further improvements are necessary. Overall, continuously evaluating the effectiveness of a data literacy program helps organizations understand its impact on individuals’ capabilities and organizational performance.


Women In Architecture: Early Insights and Reflections

The question of why there are so few women in architecture is a key one in our minds. Rather than dwelling on the negative, the conversations focus on identifying the root causes to help us move into action effectively. I have learned that the answer to this question is incredibly nuanced and layered, with many interrelated factors. Some root causes for fewer women in architecture draw from the macro level context, including a similar set of challenges experienced by women in technology. However, one of the biggest contributors is the architecture profession itself and how it is presented. This has been a hard truth that has asserted itself as a common thread throughout the conversations. For example, the lack of clarity regarding the role and value proposition of architecture, often perceived as abstract, technical, and unattainable, poses a substantial barrier. ... However, there is a powerful correspondence between the momentum for more diversity in architecture and exactly what the profession needs most now. For architects of the future to thrive, it’s not enough to excel at cognitive, architectural, and technical competencies, but just as important to master the human competencies such as communication, influence, leadership, and emotional intelligence.



New York Times Versus Microsoft: The Legal Status of Your AI Training Set

One of the problems the tech industry has had from the start is product contamination using intellectual property from a competitor. The tech industry is not alone, and the problem of one company illicitly acquiring the intellectual property of another and then getting caught goes back decades. If an engineer uses generative AI that has a training set contaminated by a competitor’s intellectual property, there is a decent chance, should that competitor find out, that the resulting product will be found as infringing and be blocked from sale -- with the company that had made use of that AI potentially facing severe fines and sanctions, depending on the court’s ruling. ... Ensuring any AI solution from any vendor contains indemnification for the use of their training set or is constrained to only use data sets that have been vetted as fully under your or your vendor’s legal control should be a primary requirement for use. (Be aware that if you provide AI capabilities to others, you will find an increasing number of customers will demand indemnification.) You’ll need to ensure that the indemnification is adequate to your needs and that the data sets won’t compromise your products or services under development or in market so your revenue stream isn’t put at risk.


How to calculate TCO for enterprise software

It’s obvious that hardware, once it has reached end-of-life, needs to be disposed of properly. With software, there are costs as well, primarily associated with data export. First, data needs to be migrated from the old software to the new, which can be complex given all the dependencies and database calls that might be required for even a single business process. Then there are backups and disaster recovery to consider. The new software might require that data be formatted in a different way. And you still might need to keep archived copies of certain data stores from the old system for regulatory or compliance reasons. Another wrinkle in the TCO calculation is estimating how long you plan to use the software. Are you an organization that doesn’t change tech stacks if it doesn’t have to and therefore will probably run the software for as long as it still does the job? In that case, it might make sense to do a five-year TCO analysis as well as a 10-year version. On the other hand, what if your company has an aggressive sustainability strategy that calls for eliminating all of its data centers within three years, and moving as many apps as possible to SaaS alternatives. 
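To see how the planned lifespan changes the picture, here is a deliberately simple sketch. The cost categories and figures are invented, not a standard formula; the point is that one-off acquisition and end-of-life export costs amortize very differently over five versus ten years.

```python
# Toy TCO model with three invented cost buckets: one-off acquisition and
# migration, annual running costs, and end-of-life data export/archival.
def tco(one_off, annual, exit_costs, years):
    total = one_off + annual * years + exit_costs
    return {"total": total, "per_year": total / years}

license_migration = 250_000  # purchase plus data migration into the new system
annual_run = 80_000          # subscriptions, support, operations
data_export = 40_000         # export, reformatting, archival at end-of-life

print(tco(license_migration, annual_run, data_export, years=5))
print(tco(license_migration, annual_run, data_export, years=10))
```

With these figures the five-year annualized cost is about 27% higher than the ten-year one, which is why the expected lifespan question in the paragraph above has to be answered before the spreadsheet is built.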



Quote for the day:

"One advantage of talking to yourself is that you know at least somebody's listening." -- Franklin P. Jones

Daily Tech Digest - July 02, 2023

OpenAI, others pushing false narratives about LLMs, says Databricks CTO

“There are definitely the larger providers, like OpenAI, Google, and so on; they have this narrative – and they’re talking in a lot of places about how – first of all, this stuff is super dangerous, not in the sense of a disruptive technology, but even in the sense of ‘it might be evil and whatever’,” Zaharia told ITPro during an interview at Databricks AI and Data Summit 2023. “It’s very sci-fi.” “OpenAI – that’s exactly the narrative they’re pushing – but others as well. “Anytime someone talks about AI alignment or whatever, it’s often from this angle: Watch out, it might be evil. They’re also saying how it’s a huge amount of work to train [models]: It’s super expensive – don’t even try it. “I’m not sure either of those things are true.” Zaharia cited MosaicML – the startup Databricks recently acquired for $1.3 billion – as having trained a large language model (LLM) with 30 million parameters that’s competitive with GPT-3, and “probably cost like ten to 20 times less” to train.


Ransomware: recovering from the inevitable

There’s no doubt that businesses’ cybersecurity teams are under an immense amount of pressure in the battle against ransomware but they can only go so far alone. There must be an awareness that it simply can’t be stopped at the source, and that defending against ransomware takes a combination of people, processes and technology. The digital world can appear complex – especially in the case of large enterprise structures – so it can be helpful to stress that the digital world and the real world are not that different. Digital protections such as patching systems, multi-factor authentication, data protection and the risk of insider threats all have real world counterparts: open windows that need to be locked at night, double locking your front door, locking away vital items in a safe, and opportunistic break-ins through unlocked windows or doors. However, whilst using a combination of people, processes and technology to minimise attacks is key, some will inevitably slip through the cracks, which is where recovery comes into play.


AI Foundation launches AI.XYZ to give people their own AI assistants

The platform enables users to design their own AI assistants that can safely support them in both personal and professional settings. Each AI is unique to its creator and can assist with tasks such as note-taking, email writing, brainstorming, and offering personalized advice and perspectives. Unlike generic AI assistants from companies like Amazon, Google, Apple, or ChatGPT, each AI assistant designed on AI.XYZ belongs exclusively to its creator, knows the person’s values and goals, and provides more personalized help. The company sees a significant opportunity for workplaces and enterprises to provide each of their employees with their own AIs. ... AI.XYZ is available in public beta and can be accessed on the web with an invitation code. Creators can interact with their AIs through text, voice, and video. A free subscription to AI.XYZ allows users to get started creating their own AI, while a premium subscription for $20 per month allows additional capabilities and customization options. The AI Foundation has collaborated with top research institutions like the Technical University of Munich to create “sustainable AI” for everyone.


TDD and the Impact on Security

Outside-In Test-Driven Development (TDD) is an approach to software development that begins by creating high-level acceptance tests or end-to-end tests that describe the desired behaviour of the system from the point of view of its users or external interfaces. It is closely associated with behaviour-driven development (BDD). With Outside-In TDD, the development process begins with writing a failing acceptance test that describes the desired behaviour of the system. This test is usually written from the perspective of a user or a high-level component interacting with the system. The test is expected to fail initially, as the system does not yet have the required functionality. Once the first acceptance test is in place, the next step is to write a failing unit test for the smallest possible unit of code that will help the acceptance test pass. This unit test defines the desired behaviour of a specific module or component within the system. The unit test fails because the corresponding code still needs to be implemented.
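The double loop described above can be sketched in a few lines. In practice both tests are written first and fail; the minimal implementation shown here is what you would add last to make them pass. The checkout example is invented for illustration.

```python
# Outside-in TDD in miniature. The outer (acceptance) test describes user
# behaviour; the inner (unit) test drives out the smallest piece of code.
def discounted_total(prices, discount):
    """Smallest unit under test: apply a fractional discount to a sum."""
    return sum(prices) * (1 - discount)

def checkout(cart):
    """The behaviour the outer acceptance test describes, built on the unit."""
    return round(discounted_total(cart["prices"], cart.get("discount", 0.0)), 2)

# Inner loop: unit test for the smallest component.
assert discounted_total([10.0, 20.0], discount=0.1) == 27.0

# Outer loop: acceptance test, written first, from the user's point of view.
assert checkout({"prices": [10.0, 20.0], "discount": 0.1}) == 27.0

print("all tests pass")
```

The security relevance the article's title hints at comes from the same loop: abuse cases (a negative discount, a tampered cart) can be written as failing outer tests before any code exists to exploit.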


Wasm: 5 things developers should be tracking

One of Wasm’s biggest draws is its cross-platform portability. Wasm is a neutral binary format that can be shoved in a container and run anywhere. This is key in our increasingly polyglot hardware and software world. Developers hate compiling to multiple different formats because every additional architecture (x86, Arm, Z, Power, etc.) adds to your test matrix, and exploding test matrices is a very expensive problem. QE is the bottleneck for many development teams. With Wasm, you have the potential to write applications, compile them once, test them once, and deploy them on any number of hardware and software platforms that span the hybrid cloud, from the edge to your data center to public clouds. A developer on a Mac could compile a program into a Wasm binary, test it locally, and then confidently push it out to all of the different machines that it’s going to be deployed on. All of these machines will already have a Wasm runtime installed on them, one that is battle tested for that particular platform, thereby making the Wasm binaries extremely portable, much like Java.


Getting Started with Data Literacy: Two Tips for Success

How should an enterprise get started? Langer says he “came to the inescapable conclusion that data literacy must start with leaders. Data literacy isn't just for the rank-and-file.” As a litmus test when he starts talking to organizations, he asks about their leader's commitment to data literacy. “I ask them, ‘Is your organization willing to send your leaders to training -- managers, executives, the C-suite, all of them?’ If not, which is often the case, that probably tells you everything that you need to know, because data literacy is very much a cultural transformation. If your leaders aren't all in, then there's almost no point in getting started, to be frank. If employees see their managers not exhibiting a data literacy mindset and data literacy behaviors, they will revert to business as usual.” Langer admits to receiving pushback; executives wonder if data literacy is needed because newer technology such as no-code/low-code or generative AI already make it easier to gain insights.


How Data Observability Helps Shift Left Your Data Reliability

When you consider data observability, the term “shift left” refers to a proactive strategy that involves incorporating observability practices at the early stages of the data lifecycle. This concept draws inspiration from software development methodologies and emphasizes the importance of addressing potential issues and ensuring high quality right from the start. When applied to data observability, shifting left entails integrating observability practices and tools into the data pipeline and infrastructure right from the outset. This approach avoids treating observability as an afterthought or implementing it only in later stages. The primary goal is to identify and resolve data quality, integrity, and performance issues as early as possible, thereby minimizing the likelihood of problems propagating downstream. ... Taking a proactive approach to address data incidents early on enables organizations to mitigate the potential impact and cost associated with data issues. 
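A minimal sketch of what shifting left looks like in code, assuming an invented schema: records are validated at the ingestion step, and bad ones are quarantined before they can propagate downstream.

```python
# Shift-left observability in miniature: validate at ingestion instead of
# discovering bad data downstream. The schema and checks are illustrative.
EXPECTED_FIELDS = {"user_id", "event", "ts"}

def ingest(records):
    """Split incoming records into accepted rows and quarantined rows."""
    accepted, quarantined = [], []
    for r in records:
        missing = EXPECTED_FIELDS - r.keys()
        if missing or r.get("ts") is None:
            quarantined.append((r, sorted(missing) or ["null ts"]))
        else:
            accepted.append(r)
    return accepted, quarantined

good = {"user_id": 1, "event": "login", "ts": 1700000000}
bad = {"user_id": 2, "event": "login"}  # missing timestamp
accepted, quarantined = ingest([good, bad])
print(len(accepted), len(quarantined))  # 1 1
```

The quarantine list doubles as an observability signal: its size and the reasons attached to each record are exactly the early-warning metrics a shift-left pipeline would alert on.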


Architecting Real-Time Analytics for Speed and Scale

Apache Druid has emerged as the preferred database for real-time analytics applications due to its high performance and ability to handle streaming data. With its support for true stream ingestion and efficient processing of large data volumes in sub-second timeframes, even under heavy loads, Apache Druid excels in delivering fast insights on fresh data. Its seamless integration with Apache Kafka and Amazon Kinesis further solidifies its position as the go-to choice for real-time analytics. When choosing an analytics database for streaming data, considerations such as scale, latency, and data quality are crucial. The ability to handle the full-scale of event streaming, ingest and correlate multiple Kafka topics or Kinesis shards, support event-based ingestion, and ensure data integrity during disruptions are key requirements. Apache Druid not only meets these criteria but goes above and beyond to deliver on these expectations and provide additional capabilities.


Why business leaders must tackle ethical considerations as AI becomes ubiquitous

When it comes to ethical AI, there is a true balancing act. The industry as a whole has differing views on what is deemed ethical, making it unclear who should make the executive decision on whose ethics are the right ethics. However, perhaps the question to ask is whether companies are being transparent about how they are building these systems. This is the main issue we are facing today. Ultimately, although supporting regulation and legislation may seem like a good solution, even the best efforts can be thwarted in the face of fast-paced technological advancements. The future is uncertain, and it is very possible that in the next few years, a loophole or an ethical quagmire may surface that we could not foresee. This is why transparency and competition are the ultimate solutions to ethical AI today. Currently, companies compete to provide a comprehensive and seamless user experience. For example, people may choose Instagram over Facebook, Google over Bing, or Slack over Microsoft Teams based on the quality of experience. 


ChatGPT, compliance, and the impending wave of AI-fuelled content

Despite its convincing rhetoric, ChatGPT is, at times, deeply flawed. Quite simply, its statements can’t always be trusted. This is a reasonably devastating indictment for a tool that invites such intense scrutiny, and it has been acknowledged by OpenAI, which admits that “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” ChatGPT has a vast wealth of knowledge because it was trained on all manner of web content, from books and academic articles to blog posts and Wikipedia entries. Alas, the internet is not a domain renowned for its factual integrity. Furthermore, ChatGPT doesn’t actually connect to the internet to track down the information it needs to respond. Instead, it simply repeats patterns it has seen in its training data. In other words, ChatGPT arrives at an answer by making a series of guesses, which is part of the reason it can argue wrong answers as if they were completely true, and give different (incorrect) answers to the same question.
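
The "repeating patterns" point can be illustrated with a deliberately tiny model, a toy bigram generator (not how ChatGPT actually works internally): it produces fluent-looking word sequences purely from co-occurrence patterns in its training text, with no notion of truth:

```python
import random

# Toy illustration of "repeating patterns seen in training data": a bigram
# model records which word followed which in the corpus, then generates by
# guessing the next word from those observed patterns.
def train_bigrams(corpus):
    model = {}
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break  # no observed continuation for this word
        out.append(rng.choice(choices))
    return " ".join(out)
```

Every generated sequence is locally plausible because each step follows an observed pattern, yet nothing constrains the whole to be correct; scaled up enormously, that is the intuition behind confident-sounding wrong answers.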



Quote for the day:

"The mediocre leader tells. The good leader explains. The superior leader demonstrates. The great leader inspires." -- Buchholz and Roth

Daily Tech Digest - March 21, 2023

CFO Priorities This Year: Rethinking the Finance Function

Marko Horvat, Gartner VP of research, adds that CFOs must transition away from optimization and start thinking about transformation. “Making things faster, more accurate, and with less effort has benefits, but each round of improvement brings diminishing returns,” he says. “CFOs must start thinking about ways to transform the function to build and enhance capabilities, such as advanced data and analytics, in order to truly unlock more value from the finance function.” Sehgal says CFOs should be asking questions such as: How do we create a futuristic vision for finance? Should short-term gains override longer-term benefits? And how do we fund digital transformation under the current pressures? “CFOs are focused on elevating the role of finance in the organization to be a value integrator across the enterprise, as well as enhancing value through new strategies that not only support development but that also promote innovations for capital allocation,” he explains.


Build Software Supply Chain Trust with a DevSecOps Platform

When building an application, developers, platform operators and security professionals want to monitor vulnerabilities throughout the software supply chain. The challenge comes when multiple vulnerability scanners are used at different stages in the pipeline and different teams are notified and required to take action without proper coordination. A security-focused application platform can build in scan orchestration to not only detect vulnerabilities but also map those findings to a workload. This feature allows developers to identify issues throughout the life cycle of their applications and helps them resolve those issues, shifting responsibility left with a higher degree of automation. Moreover, the platform can build trust with security analysts by showing the performance of application developers and helping them understand the risk that teams are facing. Once a platform detects these vulnerabilities, both at build time and at runtime, it needs to help developers triage and remediate them. 
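
One way to picture the mapping step is a small merge routine (a hypothetical sketch, not any particular platform's implementation) that deduplicates findings from multiple scanners and attaches each one to the workload it affects:

```python
from collections import defaultdict

# Hypothetical scan orchestration sketch: merge findings from several
# scanners into one deduplicated view per workload, keeping the most
# severe rating any scanner reported for each CVE.
SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def map_findings_to_workloads(findings):
    """findings: iterable of dicts with 'scanner', 'cve', 'workload', 'severity'."""
    by_workload = defaultdict(dict)
    for f in findings:
        entry = by_workload[f["workload"]].setdefault(
            f["cve"], {"cve": f["cve"], "severity": f["severity"], "scanners": set()}
        )
        entry["scanners"].add(f["scanner"])
        if SEVERITY_ORDER[f["severity"]] > SEVERITY_ORDER[entry["severity"]]:
            entry["severity"] = f["severity"]
    return by_workload
```

Developers then see one prioritized entry per vulnerability per workload, rather than a separate alert from each scanner.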


Developers, unite! Join the fight for code quality

Writing good code is a craft as much as any other, and should be regarded as such. You have every right to advocate for an environment and an operational model that respect the intricacies of what you do and the significance of the outcome. It’s important to value, and feel valued for, what you do. And not just for your own immediate happiness—it’s also a long-term investment in your career. Making things you don’t think are any good tends to wear on the psyche, which doesn’t exactly feed into a more motivated workday. In fact, a study conducted by Oxford University’s Saïd Business School found that happy workers were 13% more productive. What’s good for your craft is ultimately best for business—a conclusion both engineers and their employers can feel good about. Software plays a big role at just about every level of society—it’s how we create and process information, access goods and services, and entertain ourselves. With the advent of software-defined vehicles, it even determines how we move between physical locations.


Why data literacy matters for business success

Aligning data strategies with overall business strategy and operations is no mean feat. Chief Data Officers (CDOs) are ideal candidates to marry data analytics with the wider business, given their appreciation of informed decision-making and their desire to foster a data culture in which internal information is properly managed and engaged with throughout the organisation. Moreover, their understanding of the technology landscape will assist when making platform and software selections. This stands to benefit all departments, which will gain access to the tools and skills needed to work with data and derive insights. CDOs also embody the “can do” approach to professional development, believing it’s possible to train employees in data-related skills regardless of their technical proficiency. There’s a well-established correlation between hiring a CDO and business success, with research from Forrester suggesting that 89% of organisations harnessing analytics to improve operations, and that appointed a CDO to oversee the process, have seen a positive business impact.


What the 'new automation' means for technology careers

AI is already playing a part in handling technology tasks. A survey released by OpsRamp finds more than 60% of companies adopting AIOps, which applies AI to monitor and improve IT operations themselves. The greatest IT operations challenge for enterprises in 2023 was automating as many operations as possible, cited by 66% of respondents. The main benefits of AIOps seen so far include reduction in open incident tickets (65%); reduction in mean time to detect or restore (56%), and automation of tedious tasks (52%). The latest IT staffing data from Janco Associates finds recent layoffs affected data center and operations staff, with business leaders looking to automate IT processes and reporting. The apparent trend here is that those pursuing careers in technology need to look higher up the stack -- at applications and business consulting. However, there's still a lot of work for people working with the plumbing and code. Unfortunately, getting to automation-driven abstraction -- especially if it involves AI -- requires some manual work up front.


How Cybersecurity Delays Critical Infrastructure Modernization

For critical infrastructure organizations, building a security strategy that works from both an operational technology (OT) and consumer data perspective is not as straightforward as it is in many other industries. Safely storing this data while implementing the latest technology has proved to be a significant challenge across the sector, meaning the service provided by these companies is being hampered. These concerns have prevented a range of technologies from being integrated quickly or at all. These technologies include renewable energy projects, electric vehicle technology, natural disaster contingencies and moving towards smarter grid solutions to replace aging infrastructure. Older operational technology becomes difficult to update and secure sufficiently while the use of third-party software also reduces the level of control organizations have over their data. In addition to this, a lack of automation increases the chances of human error, which could present opportunities to cybercriminals.


What Are Foundation AI Models Exactly?

The generative AI solution can analyze input data against 175 billion parameters and demonstrates a deep understanding of written language. The smart tool can answer questions, summarize and translate text, produce articles on a given topic, write code, and much more. All you need is to provide ChatGPT with the right prompts. OpenAI’s groundbreaking product is just one example of the foundation models that are disrupting AI application development as we know it. Instead of training multiple models for separate use cases, you can now leverage a pre-trained AI solution to enhance or fully automate tasks across multiple departments and job functions. With foundation AI models like ChatGPT, companies no longer have to train algorithms from scratch for every task they want to enhance or automate. Instead, you only need to select a foundation model that best fits your use case – and fine-tune its performance for a specific objective you’d like to achieve.


As hiring freezes and layoffs hit, tech teams struggle to do more with less

There are a number of organizational hurdles holding back employees’ learning and development, Pluralsight found. For HR and L&D directors, budget constraints and costs were identified as the biggest barriers to upskilling (30%). This was also true for technology leaders, with 15% blaming financial constraints for getting in the way of employee upskilling. For technology workers themselves, finding time to invest in their own training was identified as the main issue: 42% of workers said they were too busy to upskill, with 18% saying their manager didn’t allow any time during the week to learn new skills. As a result, 21% of tech workers feel pressured to learn outside of work hours. ... However, the report added that giving employees time to invest in their training, address skills gaps and gain valuable growth opportunities are key factors in retention. “Upskilling during work hours will hinder short-term productivity, and managers often bear the brunt of this stress. But don’t sacrifice short-term productivity for long-term success,” the report said.


CISA kicks off ransomware vulnerability pilot to help spot ransomware-exploitable flaws

CISA says it will seek out affected systems using existing services, data sources, technologies, and authorities, including CISA's Cyber Hygiene Vulnerability Scanning. CISA initiated the RVWP by notifying 93 organizations identified as running instances of Microsoft Exchange Service with a vulnerability called "ProxyNotShell," widely exploited by ransomware actors. The agency said this round demonstrated "the effectiveness of this model in enabling timely risk reduction as we further scale the RVWP to additional vulnerabilities and organizations." Eric Goldstein, executive assistant director for cybersecurity at CISA, said, "The RVWP will allow CISA to provide timely and actionable information that will directly reduce the prevalence of damaging ransomware incidents affecting American organizations. We encourage every organization to urgently mitigate vulnerabilities identified by this program and adopt strong security measures consistent with the U.S. government's guidance on StopRansomware.gov."


A Simple Framework for Architectural Decisions

Technology Radar captures techniques, platforms, tools, languages and frameworks, and their level of adoption across an organization. However, it may not cover every need. Establishing consistent practices for concerns that cut across different parts of the system can be helpful. For example, you might want to ensure all logging is done in the same format and with the same information included. Or, if you’re using a REST API, you might want to establish conventions around how it should be designed and used, such as which headers to use or how to name things. Additionally, if you’re using multiple similar technologies, it can be useful to offer guidance on when to use each one. Technology Standards define the rules for selecting and using technologies within your company. They reduce the risk of adopting new technology in a suboptimal way and drive consistency across the organization.
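
The logging example can be made concrete with a small sketch (the JSON field set here is a hypothetical convention, not a prescribed standard): every service formats log lines the same way, so downstream tooling can rely on a single shape.

```python
import io
import json
import logging
from datetime import datetime, timezone

# Sketch of an organization-wide logging standard: every service emits one
# JSON object per line with the same fields. The field set is an assumed
# convention chosen for illustration.
class JsonFormatter(logging.Formatter):
    def __init__(self, service):
        super().__init__()
        self.service = service

    def format(self, record):
        return json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "service": self.service,
            "level": record.levelname,
            "message": record.getMessage(),
        })

# Any service wires the shared formatter into its handlers the same way.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(JsonFormatter("billing"))
logger = logging.getLogger("standard-demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("invoice created")
```

Publishing a formatter like this as a shared library, alongside a short written standard, is one lightweight way to turn a convention into something teams can adopt without re-deciding it each time.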



Quote for the day:

"Leadership is not about titles, positions, or flow charts. It is about one life influencing another." -- John C. Maxwell