Showing posts with label modernisation. Show all posts
Showing posts with label modernisation. Show all posts

Daily Tech Digest - June 30, 2025


Quote for the day:

"Sheep are always looking for a new shepherd when the terrain gets rocky." -- Karen Marie Moning


The first step in modernization: Ditching technical debt

At a high level, determining when it’s time to modernize is about quantifying cost, risk, and complexity. In dollar terms, it may seem as simple as comparing the expense of maintaining legacy systems versus investing in new architecture. But the true calculation includes hidden costs, like the developer hours lost to patching outdated systems, and the opportunity cost of not being able to adapt quickly to business needs. True modernization is not a lift-and-shift — it’s a full-stack transformation. That means breaking apart monolithic applications into scalable microservices, rewriting outdated application code into modern languages, and replacing rigid relational data models with flexible, cloud-native platforms that support real-time data access, global scalability, and developer agility. Many organizations have partnered with MongoDB to achieve this kind of transformation. ... But modernization projects are usually a balancing act, and replacing everything at once can be a gargantuan task. Choosing how to tackle the problem comes down to priorities, determining where pain points exist and where the biggest impacts to the business will be. The cost of doing nothing will outrank the cost of doing something.


Is Your CISO Ready to Flee?

“A well-funded CISO with an under-resourced security team won’t be effective. The focus should be on building organizational capability, not just boosting top salaries.” While Deepwatch CISO Chad Cragle believes any CISO just in the role for the money has “already lost sight of what really matters,” he agrees that “without the right team, tools, or board access, burnout is inevitable.” Real impact, he contends, “only happens when security is valued and you’re empowered to lead.” Perhaps that stands as evidence that SMBs that want to retain their talent or attract others should treat the CISO holistically. “True professional fulfillment and long-term happiness in the CISO role stems from the opportunities for leadership, personal and professional growth, and, most importantly, the success of the cybersecurity program itself,” says Black Duck CISO Bruce Jenkins. “When cyber leaders prioritize the development and execution of a comprehensive, efficient, and effective program that delivers demonstrable value to the business, appropriate compensation typically follows as a natural consequence.” Concerns around budget constraints is that all CISOs at this point (private AND public sector) have been through zero-based budget reviews several times. If the CISO feels unsafe and unable to execute, they will be incentivized to find a safer seat with an org more prepared to invest in security programs.


AI is learning to lie, scheme, and threaten its creators

For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organization METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception." The concerning behavior goes far beyond typical AI "hallucinations" or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up." Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder. "This is not just hallucinations. There's a very strategic kind of deception." The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception." ... "Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around." Researchers are exploring various approaches to address these challenges.


The network is indeed trying to become the computer

Think of the scale-up networks such as the NVLink ports and NVLink Switch fabrics that are part and parcel of an GPU accelerated server node – or, these days, a rackscale system like the DGX NVL72 and its OEM and ODM clones. These memory sharing networks are vital for ever-embiggening AI training and inference workloads. As their parameter counts and token throughput requirements both rise, they need ever-larger memory domains to do their work. Throw in a mixture of expert models and the need for larger, fatter and faster scale-up networks, as they are now called, is obvious even to an AI model with only 7 billion parameters. ... Then there is the scale-out network, which is used to link nodes in distributed systems to each other to share work in a less tightly coupled way than the scale-up network affords. This is the normal networking we are familiar with in distributed HPC systems, which is normally Ethernet or InfiniBand and sometimes proprietary networks like those from Cray, SGI, Fujitsu, NEC, and others from days gone by. On top of this, we have the normal north-south networking stack that allows people to connect to systems and the east-west networks that allow distributed corporate systems running databases, web infrastructure, and other front-office systems to communicate with each other. 


What Can We Learn From History’s Most Bizarre Software Bugs?

“It’s never just one thing that causes failure in complex systems.” In risk management, this is known as the Swiss cheese model, where flaws that occur in one layer aren’t as dangerous as deeper flaws overlapping through multiple layers. And as the Boeing crash proves, “When all of them align, that’s what made it so deadly.” It is difficult to test for every scenario. After all, the more inputs you have, the more possible outputs — and “this is all assuming that your system is deterministic.” Today’s codebases are massive, with many different contributors and entire stacks of infrastructure. “From writing a piece of code locally to running it on a production server, there are a thousand things that could go wrong.” ... It was obviously a communication failure, “because NASA’s navigation team assumed everything was in metric.” But you also need to check the communication that’s happening between the two systems. “If two systems interact, make sure they agree on formats, units, and overall assumptions!” But there’s another even more important lesson to be learned. “The data had shown inconsistencies weeks before the failure,” Bajić says. “NASA had seen small navigation errors, but they weren’t fully investigated.”


Europe’s AI strategy: Smart caution or missed opportunity?

Companies in Europe are spending less on AI, cloud platforms, and data infrastructure. In high-tech sectors, productivity growth in the U.S. has far outpaced Europe. The report argues that AI could help close the gap, but only if it is used to redesign how businesses operate. Using AI to automate old processes is not enough. ... Feinberg also notes that many European companies assumed AI apps would be easier to build than traditional software, only to discover they are just as complex, if not more so. This mismatch between expectations and reality has slowed down internal projects. And the problem isn’t unique to Europe. As Oliver Rochford, CEO of Aunoo AI, points out, “AI project failure rates are generally high across the board.” He cites surveys from IBM, Gartner, and others showing that anywhere from 30 to 84 percent of AI projects fail or fall short of expectations. “The most common root causes for AI project failures are also not purely technical, but organizational, misaligned objectives, poor data governance, lack of workforce engagement, and underdeveloped change management processes. Apparently Europe has no monopoly on those.”


A Developer’s Guide to Building Scalable AI: Workflows vs Agents

Sometimes, using an agent is like replacing a microwave with a sous chef — more flexible, but also more expensive, harder to manage, and occasionally makes decisions you didn’t ask for. ... Workflows are orchestrated. You write the logic: maybe retrieve context with a vector store, call a toolchain, then use the LLM to summarize the results. Each step is explicit. It’s like a recipe. If it breaks, you know exactly where it happened — and probably how to fix it. This is what most “RAG pipelines” or prompt chains are. Controlled. Testable. Cost-predictable. The beauty? You can debug them the same way you debug any other software. Stack traces, logs, fallback logic. If the vector search fails, you catch it. If the model response is weird, you reroute it. ... Agents, on the other hand, are built around loops. The LLM gets a goal and starts reasoning about how to achieve it. It picks tools, takes actions, evaluates outcomes, and decides what to do next — all inside a recursive decision-making loop. ... You can’t just set a breakpoint and inspect the stack. The “stack” is inside the model’s context window, and the “variables” are fuzzy thoughts shaped by your prompts. When something goes wrong — and it will — you don’t get a nice red error message. 


Leveraging Credentials As Unique Identifiers: A Pragmatic Approach To NHI Inventories

Most teams struggle with defining NHIs. The canonical definition is simply "anything that is not a human," which is necessarily a wide set of concerns. NHIs manifest differently across cloud providers, container orchestrators, legacy systems, and edge deployments. A Kubernetes service account tied to a pod has distinct characteristics compared to an Azure managed identity or a Windows service account. Every team has historically managed these as separate concerns. This patchwork approach makes it nearly impossible to create a consistent policy, let alone automate governance across environments. ... Most commonly, this takes the form of secrets, which look like API keys, certificates, or tokens. These are all inherently unique and can act as cryptographic fingerprints across distributed systems. When used in this way, secrets used for authentication become traceable artifacts tied directly to the systems that generated them. This allows for a level of attribution and auditing that's difficult to achieve with traditional service accounts. For example, a short-lived token can be directly linked to a specific CI job, Git commit, or workload, allowing teams to answer not just what is acting, but why, where, and on whose behalf.


How Is AI Really Impacting Jobs In 2025?

Pessimists warn of potential mass unemployment leading to societal collapse. Optimists predict a new age of augmented working, making us more productive and freeing us to focus on creativity and human interactions. There are plenty of big-picture forecasts. One widely-cited WEF prediction claims AI will eliminate 92 million jobs while creating 170 million new, different opportunities. That doesn’t sound too bad. But what if you’ve worked for 30 years in one of the jobs that’s about to vanish and have no idea how to do any of the new ones? Today, we’re seeing headlines about jobs being lost to AI with increasing frequency. And, from my point of view, not much information about what’s being done to prepare society for this potentially colossal change. ... An exacerbating factor is that many of the roles that are threatened are entry-level, such as junior coders or designers, or low-skill, including call center workers and data entry clerks. This means there’s a danger that AI-driven redundancy will disproportionately hit economically disadvantaged groups. There’s little evidence so far that governments are prioritizing their response. There have been few clearly articulated strategies to manage the displacement of jobs or to protect vulnerable workers.


AGI vs. AAI: Grassroots Ingenuity and Frugal Innovation Will Shape the Future

One way to think of AAI is as intelligence that ships. Vernacular chatbots, offline crop-disease detectors, speech-to-text tools for courtrooms: examples of similar applications and products, tailored and designed for specific sectors, are growing fast. ... If the search for AGI is reminiscent of a cash-rich unicorn aiming for growth at all costs, then AAI is more scrappy. Like a bootstrapped startup that requires immediate profitability, it prizes tangible impact over long-term ambitions to take over the world. The aspirations—and perhaps the algorithms themselves—may be more modest. Still, the context makes them potentially transformative: if reliable and widely adopted, such systems could reach millions of users who have until now been on the margins of the digital economy. ... All this points to a potentially unexpected scenario, one in which the lessons of AI flow not along the usual contours of global geopolitics and economic power—but percolate rather upward, from the laboratories and pilot programs of the Global South toward the boardrooms and research campuses of the North. This doesn’t mean that the quest for AGI is necessarily misguided. It’s possible that AI may yet end up redefining intelligence.

Daily Tech Digest - April 30, 2025


Quote for the day:

"You can’t fall if you don’t climb. But there’s no joy in living your whole life on the ground." -- Unknown


Common Pitfalls and New Challenges in IT Automation

“You don’t know what you don’t know and can’t improve what you can’t see. Without process visibility, automation efforts may lead to automating flawed processes. In effect, accelerating problems while wasting both time and resources and leading to diminished goodwill by skeptics,” says Kerry Brown, transformation evangelist at Celonis, a process mining and process intelligence provider. The aim of automating processes is to improve how the business performs. That means drawing a direct line from the automation effort to a well-defined ROI. ... Data is arguably the most boring issue on IT’s plate. That’s because it requires a ton of effort to update, label, manage and store massive amounts of data and the job is never quite done. It may be boring work, but it is essential and can be fatal if left for later. “One of the most significant mistakes CIOs make when approaching automation is underestimating the importance of data quality. Automation tools are designed to process and analyze data at scale, but they rely entirely on the quality of the input data,” says Shuai Guan, co-founder and CEO at Thunderbit, an AI web scraper tool. ... "CIOs often fall into the trap of thinking automation is just about suppressing noise and reducing ticket volumes. While that’s one fairly common use case, automation can offer much more value when done strategically,” says Erik Gaston


Outmaneuvering Tariffs: Navigating Disruption with Data-Driven Resilience

The fact that tariffs are coming was expected – President Donald Trump campaigned promising tariffs – but few could have expected their severity (145% on Chinese imports, as of this writing) and their pace of change (prohibitively high “reciprocal” tariffs on 100+ countries, only to be temporarily rescinded days later). Also unpredictable were second-order effects such as stock and bond market reactions, affecting the cost of capital, and the impact on consumer demand, due to the changing expectations of inflation or concerns of job loss. ... Most organizations will have fragmented views of data, including views of all of the components that come from a given supplier or are delivered through a specific transportation provider. They may have a product-centric view that includes all suppliers that contribute all of the components of a given product. But this data often resides in a variety of supplier-management apps, procurement apps, demand forecasting apps, and other types of apps. Some may be consolidated into a data lakehouse or a cloud data warehouse to enable advanced analytics, but the time required by a data engineering team to build the necessary data pipelines from these systems is often multiple days or weeks, and such pipelines will usually only be implemented for scenarios that the business expects will be stable over time.


The state of intrusions: Stolen credentials and perimeter exploits on the rise, as phishing wanes

What’s worrying is that in over half of intrusions (57%) the victim organizations learned about the compromise of their networks and systems from a third-party rather than discovering them through internal means. In 14% of cases, organizations were notified directly by attackers, usually in the form of ransom notes, but 43% of cases involved external entities such as a cybersecurity company or law enforcement agencies. The average time attackers spent inside a network until being discovered last year was 11 days, a one-day increase over 2023, though still a major improvement versus a decade ago when the average discovery time was 205 days. Attacker dwell time, as Mandiant calls it, has steadily decreased over the years, which is a good sign ... In terms of ransomware, the most common infection vector observed by Mandiant last year were brute-force attacks (26%), such as password spraying and use of common default credentials, followed by stolen credentials and exploits (21% each), prior compromises resulting in sold access (15%), and third-party compromises (10%). Cloud accounts and assets were compromised through phishing (39%), stolen credentials (35%), SIM swapping (6%), and voice phishing (6%). Over two-thirds of cloud compromises resulted in data theft and 38% were financially motivated with data extortion, business email compromise, ransomware, and cryptocurrency fraud being leading goals.


Three Ways AI Can Weaken Your Cybersecurity

“Slopsquatting” is a fresh AI take on “typosquatting,” where ne’er-do-wells spread malware to unsuspecting Web travelers who happen to mistype a URL. With slopsquatting, the bad guys are spreading malware through software development libraries that have been hallucinated by GenAI. ... While it is still unclear whether the bad guys have weaponized slopsquatting yet, GenAI’s tendency to hallucinate software libraries is perfectly clear. Last month, researchers published a paper that concluded that GenAI recommends Python and JavaScript libraries that don’t exist about one-fifth of the time. ... Like the SQL injection attacks that plagued early Web 2.0 warriors who didn’t adequately validate database input fields, prompt injections involve the surreptitious injection of a malicious prompt into a GenAI-enabled application to achieve some goal, ranging from information disclosure and code execution rights. Mitigating these sorts of attacks is difficult because of the nature of GenAI applications. Instead of inspecting code for malicious entities, organizations must investigate the entirery of a model, including all of its weights. ... A form of adversarial AI attacks, data poisoning or data manipulation poses a serious risk to organizations that rely on AI. According to the security firm CrowdStrike, data poisoning is a risk to healthcare, finance, automotive, and HR use cases, and can even potentially be used to create backdoors.


AI Has Moved From Experimentation to Execution in Enterprise IT

According to the SOAS report, 94% of organisations are deploying applications across multiple environments—including public clouds, private clouds, on-premises data centers, edge computing, and colocation facilities—to meet varied scalability, cost, and compliance requirements. Consequently, most decision-makers see hybrid environments as critical to their operational flexibility. 91% cited adaptability to fluctuating business needs as the top benefit of adopting multiple clouds, followed by improved app resiliency (68%) and cost efficiencies (59%). A hybrid approach is also reflected in deployment strategies for AI workloads, with 51% planning to use models across both cloud and on-premises environments for the foreseeable future. Significantly, 79% of organisations recently repatriated at least one application from the public cloud back to an on-premises or co-location environment, citing cost control, security concerns, and predictability. ... “While spreading applications across different environments and cloud providers can bring challenges, the benefits of being cloud-agnostic are too great to ignore. It has never been clearer that the hybrid approach to app deployment is here to stay,” said Cindy Borovick, Director of Market and Competitive Intelligence,


Trying to Scale With a Small Team? Here's How to Drive Growth Without Draining Your Resources

To be an effective entrepreneur or leader, communication is key, and being able to prioritize initiatives that directly align with the overall strategic vision ensures that your lean team is working on projects that have the greatest impact. Integrate key frameworks such as Responsible, Accountable, Consulted, and Informed (RACI) and Objectives and Key Results (OKRs) to maintain transparency, focus and measure progress. By focusing efforts on high-impact activities, your lean team can achieve high success and significant results without the unnecessary strain usually attributable to early-stage organizations. ... Many think that agile methodologies are only for the fast-moving software development industry — but in reality, the frameworks are powerful tools for lean teams in any industry. Encouraging the right culture is key where quick pivots, regular genuine feedback loops and leadership that promotes continuous improvement are part of the everyday workflows. This agile mindset, when adopted early, helps teams rapidly respond to market changes and client issues. ... Trusting others builds rapport. Assigning clear ownership of tasks while allowing those team members the autonomy to execute the strategies creatively and efficiently, while also allowing them to fail, is how trust is created.


Effecting Culture Changes in Product Teams

Depending on the organization, the responsibility of successfully leading a culture shift among the product team could fall to various individuals – the CPO, VP of product development, product manager, etc. But regardless of the specific title, to be an effective leader, you can’t assume you know all the answers. Start by having one-to-one conversations with numerous members on the product/engineering team. Ask for their input and understand, from their perspective, what is working, what’s not working, and what ideas they have for how to accelerate product release timelines. After conducting one-to-one discussions, sit down and correlate the information. Where are the common denominators? Did multiple team members make the same suggestions? Identify the roadblocks that are slowing down the product team or standing in the way of delivering incremental value on a more regular basis. In many cases, tech leaders will find that their team already knows how to fix the issue – they just need permission to do things a bit differently and adjust company policies/procedures to better support a more accelerated timeline. Talking one-on-one with team members also helps resolve any misunderstandings around why the pace of work must change as the company scales and accumulates more customers. Product engineers often have a clear vision of what the end product should entail, and they want to be able to deliver on that vision.


Microsoft Confirms Password Spraying Attack — What You Need To Know

The password spraying attack exploited a command line interface tool called AzureChecker to “download AES-encrypted data that when decrypted reveals the list of password spray targets,” the report said. It then, to add salt to the now open wound, accepted an accounts.txt file containing username and password combinations used for the attack, as input. “The threat actor then used the information from both files and posted the credentials to the target tenants for validation,” Microsoft explained. The successful attack enabled the Storm-1977 hackers to then leverage a guest account in order to create a compromised subscription resource group and, ultimately, more than 200 containers that were used for cryptomining. ... Passwords are no longer enough to keep us safe online. That’s the view of Chris Burton, head of professional services at Pentest People, who told me that “where possible, we should be using passkeys, they’re far more secure, even if adoption is still patchy.” Lorri Janssen-Anessi, director of external cyber assessments at BlueVoyant is no less adamant when it comes to going passwordless. ... And Brian Pontarelli, CEO of FusionAuth, said that the teams who are building the future of passwords are the same ones that are building and managing the login pages of their apps. “Some of them are getting rid of passwords entirely,” Pontarelli said


The secret weapon for transformation? Treating it like a merger

Like an IMO, a transformation office serves as the conductor — setting the tempo, aligning initiatives and resolving portfolio-level tensions before they turn into performance issues. It defines the “music” everyone should be playing: a unified vision for experience, business architecture, technology design and most importantly, change management. It also builds connective tissue. It doesn’t just write the blueprint — it stays close to initiative or project leads to ensure adherence, adapts when necessary and surfaces interdependencies that might otherwise go unnoticed. ... What makes the transformation office truly effective isn’t just the caliber of its domain leaders — it’s the steering committee of cross-functional VPs from core business units and corporate functions that provides strategic direction and enterprise-wide accountability. This group sets the course, breaks ties and ensures that transformation efforts reflect shared priorities rather than siloed agendas. Together, they co-develop and maintain a multi-year roadmap that articulates what capabilities the enterprise needs, when and in what sequence. Crucially, they’re empowered to make decisions that span the legacy seams of the organization — the gray areas where most transformations falter. In this way, the transformation office becomes more than connective tissue; it becomes an engine for enterprise decision-making.


Legacy Modernization: Architecting Real-Time Systems Around a Mainframe

When traffic spikes hit our web portal, those requests would flow through to the mainframe. Unlike cloud systems, mainframes can't elastically scale to handle sudden load increases. This created a bottleneck that could overload the mainframe, causing connection timeouts. As timeouts increased, the mainframe would crash, leading to complete service outages with a large blast radius, hundreds of other applications which depend on the mainframe would also be impacted. This is a perfect example of the problems with synchronous connections to the mainframes. When the mainframes could be overwhelmed by a highly elastic resource like the web, the result could be failure in datastores, and sometimes that failure could result in all consuming applications failing. ... Change Data Capture became the foundation of our new architecture. Instead of batch ETLs running a few times daily, CDC streamed data changes from the mainframes in near real-time. This created what we called a "system-of-reference" - not the authoritative source of truth (the mainframe remains "system-of-record"), but a continuously updated reflection of it. The system of reference is not a proxy of the system of record, which is why our website was still live when the mainframe went down.

Daily Tech Digest - June 19, 2024

Executive Q&A: Data Quality, Trust, and AI

Data observability is the process of interrogating data as it flows through a marketing stack -- including data used to drive an AI process. Data observability provides crucial visibility that helps users both interrogate data quality and understand the level of data quality prior to building an audience or executing a campaign. Data observability is traditionally done through visual tools such as charts, graphs, and Venn diagrams, but is itself becoming AI-driven, with some marketers using natural language processing and LLMs to directly interrogate the data used to fuel AI processes. ... In a way, data silos are as much a source of great distress to AI as they are to the customer experience itself. A marketer might, for example, use a LLM to help generate amazing email subject lines, but if AI generates those subject lines knowing only what is happening in that one channel, it is limited by not having a 360-degree view of the customer. Each system might have its own concept of a customer’s identity by virtue of collecting, storing, and using different customer signals. When siloed data is updated on different cycles, marketers lose the ability to engage with a customer in the precise cadence of the customer because the silos are out of synch with a customer journey.


Only 10% of Organizations are Doing Full Observability. Can Generative AI Move the Needle?

The potential applications of Generative AI in observability are vast. Engineers could start their week by querying their AI assistant about the weekend’s system performance, receiving a concise report that highlights only the most pertinent information. This assistant could provide real-time updates on system latency or deliver insights into user engagement for a gaming company, segmented by geography and time. Imagine being able to enjoy your weekend and arrive at work with a calm and optimistic outlook on Monday morning, and essentially saying to your AI assistant: “Good morning! How did things go this weekend?” or “What’s my latency doing right now, as opposed to before the version release?” or “Can you tell me if there have been any changes in my audience, region by region, for the past 24 hours?” These interactions exemplify how Generative AI can facilitate a more conversational and intuitive approach to managing development infrastructure. It’s about shifting from sifting through data to engaging in meaningful dialogue with data, where follow-up questions and deeper insights are just a query away.


The Ultimate Roadmap to Modernizing Legacy Applications

IT leaders say they plan to spend 42 percent more on average on application modernization because it is seen as a solution to technical debt and a way for businesses to reach their digital transformation goals, according to the 2023 Gartner CIO Agenda. But even with that budget allocated, businesses still face significant challenges, such as cost constraints, a shortage of staff with appropriate technical expertise, and insufficient change management policies to unite people, processes and culture around new software. To successfully navigate the path forward, IT leaders need a strategic roadmap for application modernization. The plan should include prioritizing which apps to upgrade, aligning the effort with business objectives, getting stakeholder buy-in, mapping dependencies, creating data migration checklists and working with trusted partners to get the job done. ... “Even a minor change to the functionality of a core system can have major downstream effects, and failing to account for any dependencies on legacy apps slated for modernization can lead to system outages and business interruptions,” Hitachi Solutions notes in a post.


Is it time to split the CISO role?

In one possible arrangement, a CISO reports to the CEO and a chief security technology officer (CSTO), or technology-oriented security person, reports to the CIO. At a functional level, putting the CSTO within IT gives the CIO a chance to do more integration and collaboration and unites observability and security monitoring. At the executive level, there’s a need to understand security vulnerabilities and the CISO could assist with strategic business risk considerations, according to Oltsik. “This kind of split could bring better security oversight and more established security cultures in large organizations.” ... To successfully change focus, CISOs would need to get a handle on things like the financials and company strategy and articulate cyber controls in this framework, instead of showing up every quarter with reports and warnings. “CISOs will need to incorporate their risk taxonomy into the overall enterprise risk taxonomy,” Joshi says. In this arrangement, however, the budget could arise as a point of contention. CIO budgets tend to be very cyber heavy these days, Joshi explains, and it could be difficult to create the situation where both the CISO and CIO are peers without impacting this allocation of funds.


Empowering IIoT Transformation through Leadership Support

To gain project acceptance and ultimately ensure project success will rely heavily on identifying all key stakeholders, nurturing an on-going level of mutual trust and maintaining a strong focus on targeted end results. This involves a full disclosure of desired outcomes and a willingness to adapt to individual departmental nuances. Begin with a cross-department kickoff/planning meeting to identify interested parties, open projects, and available resources. Invite participation through a discovery meeting, focusing on establishing the core team, primary department, cross-department dependencies, and consolidating open projects or shareable resources. ... Identifying all digital data blind spots at the outset highlights the scale of the problem. While many companies have Artificial Intelligence (AI) and Business Intelligence (BI) initiatives, their success depends on the quality of the source data. Consolidating these initiatives to address digital data blind spots strengthens the data-driven business case. Once a critical mass of baselines is established, projecting Return On Investment (ROI) from both a quantification and qualification perspective becomes possible. 


Will more AI mean more cyberattacks?

Organisations are also potentially exposing themselves to cyber threats through their own use of AI. According to research by law firm Hogan Lovells, 56 per cent of compliance leaders and C-suite executives believe misuse of generative AI within their organisation is a top technology-associated risk that could impact their organisation over the next few years. Despite this, over three-quarters (78 per cent) of leaders say their organisation allows employees to use generative AI in their daily work. One of the biggest threats here is so-called ‘shadow AI’, where criminals or other actors make use of, or manipulate, AI-based programmes to cause harm. “One of the key risks lies in the potential for adversaries to manipulate the underlying code and data used to develop these AI systems, leading to the production of incorrect, biased or even offensive outcomes,” says Isa Goksu, UK and Ireland chief technology officer at Globant. “A prime example of this is the danger of prompt injection attacks. Adversaries can carefully craft input prompts designed to bypass the model’s intended functionality and trigger the generation of harmful or undesirable content.” Jow believes organisations need to wake up to the risk of such activities.


What It Takes to Meet Modern Digital Infrastructure Demands and Prepare for Any IT Disaster

As you evaluate the evolving needs of your organization’s own infrastructure demands, consider whether your network is equipped to handle a growing volume of data-intensive applications — and if your team is ready to act in the face of unexpected service interruption. The push to adopt advanced technologies like AI and automation are the main drivers of network optimization for most organizations. But the growing prevalence of volatile, uncertain, complex, and ambiguous (VUCA) situations is another reason to review your communications infrastructure’s readiness to withstand future challenges. VUCA is a catch-all term for a wide range of unpredictable and challenging situations that can impact an organization’s operations, from natural disasters to political conflict, economic instability, or cyber-attacks. ... Maintaining operational continuity and resilience in the face of VUCA events requires a combination of strategic planning, operational flexibility, technological innovation, and risk-management practices. This includes investing in technology that improves agility and resilience as well as in people who are prepared for adaptive decision-making when VUCA situations arise.


APIs Are the Building Blocks of Bank Innovation. But They Have a Risky Dark Side

A key point is that it’s not just institutions suffering. Frequently APIs used by banks draw on PII (personally identifiable information) such as social security numbers, driver’s license data, medical information and personal financial data. APIs may also handle device and location data. “While this data may not seem as sensitive as PII or payment card details at first glance, it can still be exploited by malicious actors to gain insights into a user’s behavior, preferences and movements,” the report says. “In the wrong hands, this information could be used for targeted phishing attacks, social engineering, or even physical threats.” “Everything in the financial transaction world today is going across the internet, via APIs,” says Bird. ... Bird points out that the bad guys have more than just tools from the dark web to help them do their business. Frequently they tap the same mainstream tools that bankers would use. He laughs when he recalls demonstrating to a reporter how a particular fraud would have been assisted using Excel pivot tables. The journalist didn’t think of criminals using legitimate software. “Why wouldn’t they?” said Bird.


Enterprise AI Requires a Lean, Mean Data Machine

Today’s LLMs need volume, velocity, and variety of data at a rate not seen before, and that creates complexity. It’s not possible to store the kind of data LLMs require on cache memory. High IOPS and high throughput storage systems that can scale for massive datasets are a required substratum for LLMs where millions of nodes are needed. With superpower GPUs capable of lightning-fast read storage read times, an enterprise must have a low-latency, massively parallel system that avoids bottlenecks and is designed for this kind of rigor. ... It’s crucial that these technological underpinnings of the AI era be built with cost efficiency and reduction of carbon footprint in mind. We know that training LLMs and the expansion of generative AI across industries are ramping up our carbon footprint at a time when the world desperately needs to reduce it. We know too that CIOs consistently name cost-cutting as a top priority. Pursuing a hybrid approach to data infrastructure helps ensure that enterprises have the flexibility to choose what works best for their particular requirements and what is most cost-effective to meet those needs.


Building Resilient Security Systems: Composable Security

The concept of composable security represents a shift in the approach to cybersecurity. It involves the integration of cybersecurity controls into architectural patterns, which are then implemented at a modular level. Instead of using multiple standalone security tools or technologies, composable security focuses on integrating these components to work in harmony. ... The concept of resilience in composable security is reflected in a system's ability to withstand and adapt to disruptions, maintain stability, and persevere over time. In the context of microservices architecture, individual services operate autonomously and communicate through APIs. This design ensures that if one service is compromised, it does not impact other services or the entire security system. By separating security systems, the impact of a failure in one system unit is contained, preventing it from affecting the entire system. Furthermore, composable systems can automatically scale according to workload, effectively managing increased traffic and addressing new security requirements.



Quote for the day:

"The task of leadership is not to put greatness into humanity, but to elicit it, for the greatness is already there." -- John Buchan

Daily Tech Digest - April 26, 2024

Counting the Cost: The Price of Security Neglect

In the perfect scenario, the benefits of a new security solution will reduce the risk of a cyberattack. But, it’s important to invest with the right security vendor. Any time a vendor has access to a company’s systems and data, that company must assess whether the vendor’s security measures are sufficient. The recent Okta breach highlights the significant repercussions of a security vendor breach on its customers. Okta serves as an identity provider for many organizations, enabling single sign-on (SSO). An attacker gaining access to Okta’s environment could potentially compromise user accounts of Okta customers. Without additional access protection layers, customers may become vulnerable to hackers aiming to steal data, deploy malware, or carry out other malicious activities. When evaluating the privacy risks of security investments, it’s important to consider an organization’s security track record and certification history. ... Security and privacy leaders can bolster their case for additional investments by highlighting costly data breaches, and can tilt the scale in their favor by seeking solutions with strong records in security, privacy, and compliance.


Is Your Test Suite Brittle? Maybe It’s Too DRY

DRY in test code often presents a similar dilemma. While excessive duplication can make tests lengthy and difficult to maintain, misapplying DRY can lead to brittle test suites. Does this suggest that the test code warrants more duplication than the application code? A common solution to brittle tests is to use the DAMP acronym to describe how tests should be written. DAMP stands for "Descriptive and Meaningful Phrases" or "Don’t Abstract Methods Prematurely." Another acronym (we love a good acronym!) is WET: "Write Everything Twice," "Write Every Time," "We Enjoy Typing," or "Waste Everyone’s Time." The literal definition of DAMP has good intention - descriptive, meaningful phrases and knowing the right time to extract methods are essential when writing software. However, in a more general sense, DAMP and WET are opposites of DRY. The idea can be summarized as follows: Prefer more duplication in tests than you would in application code. However, the same concerns of readability and maintainability exist in application code as in test code. Duplication of concepts causes the same problems of maintainability in test code as in application code.


PCI Launches Payment Card Cybersecurity Effort in the Middle East

The PCI SSC plans to work closely with any organization that handles payments within the Middle East payment ecosystem, with a focus on security, says Nitin Bhatnagar, PCI Security Standards Council regional director for India and South Asia, who will now also oversee efforts in the Middle East. "Cyberattacks and data breaches on payment infrastructure are a global problem," he says. "Threats such as malware, ransomware, and phishing attempts continue to increase the risk of security breaches. Overall, there is a need for a mindset change." The push comes as the payment industry itself faces significant changes, with alternatives to traditional payment cards taking off, and as financial fraud has grown in the Middle East. ... The Middle East is one region where the changes are most pronounced. Middle East consumers prefer digital wallets to cards, 60% to 27%, as their most preferred method of payment, while consumers in the Asia-Pacific region slightly prefer cards, 43% to 38%, according to an August 2021 report by consultancy McKinsey & Company.


4 ways connected cars are revolutionising transportation

Connected vehicles epitomize the convergence of mobility and data-driven technology, heralding a new era of transportation innovation. As cars evolve into sophisticated digital platforms, the significance of data management and storage intensifies. The storage industry must remain agile, delivering solutions that cater to the evolving needs of the automotive sector. By embracing connectivity and harnessing data effectively, stakeholders can unlock new levels of safety, efficiency, and innovation in modern transportation. ... Looking ahead, connected cars are poised to transform transportation even further. As vehicles become more autonomous and interconnected, the possibilities for innovation are limitless. Autonomous driving technologies will redefine personal mobility, enabling efficient and safe transportation solutions. Data-driven services will revolutionise vehicle ownership, offering personalised experiences tailored to individual preferences. Furthermore, the integration of connected vehicles with smart cities will pave the way for sustainable and efficient urban transportation networks.


Looking outside: How to protect against non-Windows network vulnerabilities

Security administrators running Microsoft systems spend a lot of time patching Windows components, but it’s also critical to ensure that you place your software review resources appropriately – there’s more out there to worry about than the latest Patch Tuesday. ... Review the security and patching status of your edge, VPN, remote access, and endpoint security. Each of these endpoint software has been used as an entryway into many governments and corporate networks. Be prepared to immediately patch and or disable any of these software tools at a moment’s notice should the need arise. Ensure that you have a team dedicated to identifying and tracking resources to help alert you to potential vulnerabilities and attacks. Resources such as CISA can keep you alerted as can making sure you are signed up for various security and vendor alerts as well as having staff that are aware of the various security discussions online. These edge devices and software should always be kept up to date and you should review life cycle windows as well as newer technology and releases that may decrease the number of emergency patching sessions your Edge team finds themselves in.


Application Delivery Controllers: A Key to App Modernization

As the infrastructure running our applications has grown more complex, the supporting systems have evolved to be more sophisticated. Load balancers, for example, have been largely superseded by application delivery controllers (ADCs). These devices are usually placed in a data center between the firewall and one or more application servers, an area known as the demilitarized zone (DMZ). While first-generation ADCs primarily handled application acceleration and load balancing between servers, modern enterprise ADCs have considerably expanded capabilities and have evolved into feature-rich platforms. Modern ADCs include such capabilities as traffic shaping, SSL/TLS offloading, web application firewalls (WAFs), DNS, reverse proxies, security analytics, observability and more. They have also evolved from pure hardware form factors to a mixture of hardware and software options. One leader of this evolution is NetScaler, which started more than 20 years ago as a load balancer. In the late 1990s and early 2000s, it handled the majority of internet traffic. 


Curbing shadow AI with calculated generative AI deployment

IT leaders countered by locking down shadow IT or making uneasy peace with employees consuming their preferred applications and compute resources. Sometimes they did both. Meanwhile, another unseemly trend unfolded, first slowly, then all at once. Cloud consumption became unwieldy and costly, with IT shooting itself in the foot with misconfigurations and overprovisioning among other implementation errors. As they often do when investment is measured versus business value, IT leaders began looking for ways to reduce or optimize cloud spending. Rebalancing IT workloads became a popular course correction as organizations realized applications may run better on premises or in other clouds. With cloud vendors backtracking on data egress fees, more IT leaders have begun reevaluating their positions. Make no mistake: The public cloud remains a fine environment for testing and deploying applications quickly and scaling them rapidly to meet demand. But it also makes organizations susceptible to unauthorized workloads. The growing democratization of AI capabilities is an IT leader’s governance nightmare. 


CIOs eager to scale AI despite difficulty demonstrating ROI, survey finds

“Today’s CIOs are working in a tornado of innovation. After years of IT expanding into non-traditional responsibilities, we’re now seeing how AI is forcing CIOs back to their core mandate,” Ken Wong, president of Lenovo’s solutions and services group, said in a statement. There is a sense of urgency to leverage AI effectively, but adoption speed and security challenges are hindering efforts. Despite the enthusiasm for AI’s transformative potential, which 80% of CIOs surveyed believe will significantly impact their businesses, the path to integration is not without its challenges. Notably, large portions of organizations are not prepared to integrate AI swiftly, which impacts IT’s ability to scale these solutions. ... IT leaders also face the ongoing challenge of demonstrating and calculating the return on investment (ROI) of technology initiatives. The Lenovo survey found that 61% of CIOs find it extremely challenging to prove the ROI of their tech investments, with 42% not expecting positive ROI from AI projects within the next year. One of the main difficulties is calculating ROI to convince CFOs to approve budgets, and this challenge is also present when considering AI adoption, according to Abhishek Gupta, CIO of Dish TV. 


AI Bias and the Dangers It Poses for the World of Cybersecurity

Without careful monitoring, these biases could delay threat detection, resulting in data leakage. For this reason, companies combine AI’s power with human intelligence to reduce the bias factor shown by AI. The empathy element and moral compass of human thinking often prevent AI systems from making decisions that could otherwise leave a business vulnerable. ... The opposite could also occur, as AI could label a non-threat as malicious activity. This could lead to a series of false positives that cannot even be detected from within the company. ... While some might argue that this is a good thing because supposedly “the algorithm works,” it could also lead to alert fatigue. AI threat detection systems were added to ease the workload in the human department, reducing the number of alerts. However, the constant red flags could cause more work for human security providers, giving them more tickets to solve than they originally had. This could lead to employee fatigue and human error and take away the attention from actual threats that could impact security.


The Peril of Badly Secured Network Edge Devices

The biggest risks involved anyone using internet-exposed Cisco Adaptive Security Appliance devices, who were five times more likely than non-ASA users to file a claim. Users of internet-exposed Fortinet devices were twice as likely to file a claim. Another risk comes in the form of Remote Desktop Protocol. Organizations with internet-exposed RDP filed 2.5 times as many claims as organizations without it, Coalition said. Mass scanning by attackers, including initial access brokers, to detect and exploit poorly protected RDP connections remains rampant. The sheer quantity of new vulnerabilities coming to light underscores the ongoing risks network edge devices pose. ... Likewise for Cisco hardware: "Several critical vulnerabilities impacting Cisco ASA devices were discovered in 2023, likely contributing to the increased relative frequency," Coalition said. In many cases, organizations fail to patch these vulnerabilities, leaving them at increased risk, including by attackers targeting the Cisco AnyConnect vulnerability, designated as CVE-2020-3259, which the vendor first disclosed in May 2020.



Quote for the day:

"Disagree and commit is a really important principle that saves a lot of arguing." -- Jeff Bezos

Daily Tech Digest - February 05, 2024

8 things that should be in a company BEC policy document

Smart boards and CEOs should demand that CISOs include BEC-specific procedures in their incident response (IR) plans, and companies should create policies that require security teams to update these IR plans regularly and test their efficacy. As a part of that, security and legal experts recommend that organizations plan for legal involvement across all stages of incident response. Legal especially should be involved in how incidents are communicated with internal and external stakeholders to ensure the organization doesn’t increase its legal liability if a BEC attack hits. “Any breach may carry legal liability, so it’s best to have the discussion before the breach and plan as much as possible to address issues in advance rather than to inadvertently take actions that either causes liability that might not otherwise have existed, or increases liability beyond what would have existed,” Reiko Feaver, a privacy and data security attorney and partner at Culhane Meadows, tells CSO. Feaver, who advises clients on BEC best practices, training and compliance, says BEC policy documents should stipulate that legal be part of the threat modeling team, analyzing potential impacts from different types of BEC attacks so the legal liability viewpoint can be folded into the response plan.


Many Employees Fear Being Replaced by AI — Here's How to Integrate It Into Your Business Without Scaring Them.

The first goal of integrating AI should be understanding the quickest way for it to start having a positive monetary benefit. While our AI project is still a work in progress, we are expecting to increase revenue anywhere from $2 million to $20 million as a result of a first round of investment of under $100,000. But to achieve that type of result, leaders need to get comfortable with AI and figure out the challenges and complexities they might encounter. ... If you are a glass-half-full kind of person, listening to the glass-half-empty kind of person offers a complementary point of view. Whenever I have ideas to really move the numbers, I tend to act fast. It is crucial that people understand that I am not fast-tracking AI integration because I am unhappy with our current process or people. It is because I am happy that I will not risk what we already have unless I am fully sold on the range of the upside — and I want to expedite the learning process to get to those benefits faster. I still want to talk to as many people as I can — employees, developers, marketing folks, product managers, external investors — both for the tone of responses and any major issues. Those red flags may be great things to consider or I need to give people more information. Either way, my response can alleviate their concerns. 


The role of AI in modernising accounting practices

Accountants, like any other professionals, have varied views on AI—some see it as a friend, appreciating its ability to automate tasks, enhance efficiency, and reduce errors. They view AI as a valuable ally, freeing up time for strategic and analytical work. On the flip side, others perceive AI as a threat, fearing job displacement and the loss of the human touch in financial decision-making. Striking a balance between leveraging AI’s benefits for efficiency while preserving the importance of human skills is crucial for successful integration into accounting practices. ... Notably, machine learning algorithms and natural language processing are gaining prominence, enabling accountants to delve into more sophisticated tasks such as intricate data analysis, anomaly detection, and the generation of actionable insights from complex datasets. As technology continues to evolve, the trajectory of AI in accounting is expected to expand further. Future developments might include more sophisticated predictive analytics, enhanced natural language understanding for improved communication, and increased automation of compliance-related tasks. 


10 ways to improve IT performance (without killing morale)

When working to improve IT performance, leaders frequently focus on the technology instead of zeroing in on the business process. “We are usually motivated to change what’s within the scope of our control because we can move more quickly and see results sooner,” says Matthew Peters, CTO at technology services firm CAI. Yet a technology-concentrated approach can create significant risk, such as breaking processes that lie outside of IT or overspending on solutions that may only perpetuate the issue that still must be resolved. ... A great way to improve IT performance while maintaining team morale is by developing a culture of collaboration, says Simon Ryan, CTO at network management and audit software firm FirstWave. “Encourage team members to communicate openly — listen to their concerns and provide opportunities for skill development,” he explains. “This strategy is advantageous because it links individual development to overall team performance, thereby fostering a sense of purpose.” Ignoring the human factor is the most common team-related blunder, Ryan says. “An overemphasis on tasks and deadlines without regard for the team’s well-being can lead to burnout and unhappiness,” he warns. 


How Digital Natives are Reshaping Data Compliance

With their forward-thinking mindsets, today's chief compliance officers are changing the perception of emerging technologies from threats to opportunities. Rather than reacting with outright bans, they thoughtfully integrate new tools into the compliance framework. This balances innovation with appropriate risk management. It also positions compliance as an enabler of progress rather than a roadblock. The benefits of this mindset are many: A forward-thinking culture that thoughtfully integrates innovations into business processes and compliance frameworks. This allows organizations to harness the benefits of technology ethically. With an opportunistic mindset, compliance teams can explore how new tools like AI, blockchain, and automation can be used to make compliance activities more effective, efficient and data driven. When seen as working alongside business leaders to evaluate risks and implement appropriate guardrails for new tech, compliance teams’ collaborative approaches enable progress and innovation. These new technologies open up possibilities to continuously improve and modernize compliance programs. An opportunity-driven perspective seizes on tech's potential.


How to choose the right NoSQL database

Before choosing a NoSQL database, it's important to be certain that NoSQL is the best choice for your needs. Carl Olofson, research vice president at International Data Corp. (IDC), says "back office transaction processing, high-touch interactive application data management, and streaming data capture" are all good reasons for choosing NoSQL. ... NoSQL databases can break down data into segments—or shards—which can be useful for large deployments running hundreds of terabytes, Yuhanna says. “Sharding is an essential capability for NoSQL to scale databases,” Yuhanna says. “Customers often look for NoSQL solutions that can automatically expand and shrink nodes in horizontally scaled clusters, allowing applications to scale dynamically.” ... Some NoSQL databases can run on-premises, some only in the cloud, while others in a hybrid cloud environment, Yuhanna says. “Also, some NoSQL has native integration with cloud architectures, such as running on serverless and Kubernetes environments,” Yuhanna says. “We have seen serverless as an essential factor for customers, especially those who want to deliver good performance and scale for their applications, but also want to simplify infrastructure management through automation.”


What’s Coming in Analytics (And How We’ll Get There)

The notion of composability is not just a buzzword; it's the cornerstone of modern application development. The industry is gradually moving towards a more composable enterprise, where modular, agile products integrate insights, data, and operations at their core. This transition facilitates the creation of innovative experiences tailored to user needs, significantly lowering development costs, accelerating time to market and fostering a thriving generative AI ecosystem. This more agile application development environment will also lead to a convergence of AI and BI, such that AI-powered embedded analytics may even supplant current BI tools. This will lead to a more data-driven culture where the business uses real-time analytics as an integral part of its daily work, enabling more proactive and predictive decision-making. ... As we advance into the future, the analytics industry is poised on the edge of a monumental shift. This evolution is akin to discovering a new, uncharted continent in the realm of data processing and complex analysis. This exploration into unknown territories will reveal analytics capabilities far beyond our current understanding.


Businesses banning or limiting use of GenAI over privacy risks

Organizations recognize the need to reassure their customers about how their data is being used. “94% of respondents said their customers would not buy from them if they did not adequately protect data,” explains Harvey Jang, Cisco VP and Chief Privacy Officer. “They are looking for hard evidence the organization can be trusted as 98% said that external privacy certifications are an important factor in their buying decisions. These stats are the highest we’ve seen in Cisco’s privacy research over the years, proving once more that privacy has become inextricably tied to customer trust and loyalty. This is even more true in the era of AI, where investing in privacy better positions organizations to leverage AI ethically and responsibly.” Despite the costs and requirements privacy laws may impose on organizations, 80% of respondents said privacy laws have positively impacted them, and only 6% said the impact has been negative. Strong privacy regulation boosts consumer confidence and trust in the organizations where they share their data. Further, many governments and organizations implement data localization requirements to keep specific data within a country or region.


4 ways to help your organization overcome AI inertia

The research suggests the tricky combination of a fearful workforce and the unpredictability of the current regulatory environment means many organizations are still stuck at the AI starting gate. As a result, not only are pilot projects thin on the ground, but so are the basic foundations -- in terms of both data frameworks and strategies -- upon which these initiatives are created. About two-fifths (41%) of data leaders said they have little or no data governance framework, which is just a percentage higher than the previous year's Maturity Index, when 40% of data leaders said they have little or no data governance framework, which is a set of standards and guidelines that enable organizations to manage their data effectively. Just over a quarter of data leaders (27%) said their organization has no data strategy at all, which is only a slight improvement on the previous year's figure (29%). "I get why not everybody's quite there yet," says Carruthers, who, as a former CDO, understands the complexities involved in strategy and governance. ... The good news is some digital leaders are making headway. Andy Moore, CDO at Bentley Motors, is focused on building the foundations for the exploitation of emerging technologies, such as AI.


Data Lineage in Modern Data Engineering

There are generally two types of data lineage, namely forward lineage and backward lineage. Forward Lineage - It is known as downstream lineage; it tracks the flow of data from its source to its destination. It outlines the path that data takes through various stages of processing, transformations, and storage until it reaches its destination. It helps developers understand how data is manipulated and transformed, aiding in the design and improvement of the overall data processing workflow and quickly identifying the point of failure. By tracing the data flow forward, developers can pinpoint where transformations or errors occurred and address them efficiently. It is essential for predicting the impact of changes on downstream processes. ... Backward Lineage -  It is also known as upstream lineage; it traces the path of data from its destination back to its source. It provides insights into the origins of the data and the various transformations it undergoes before reaching its current state. It is crucial for ensuring data quality by allowing developers to trace any issues or discrepancies back to their source. By understanding the data's journey backward, developers can identify and rectify anomalies at their origin. 



Quote for the day:

“Nobody talks of entrepreneurship as survival, but that’s exactly what it is.” -- Anita Roddick

Daily Tech Digest - January 31, 2024

Rethinking Testing in Production

With products becoming more interconnected, trying to accurately replicate third-party APIs and integrations outside of production is close to impossible. Trunk-based development, with its focus on continuous integration and delivery, acknowledges the need for a paradigm shift. Feature flags emerge as the proverbial Archimedes lever in this transformation, offering a flexible and controlled approach to testing in production. Developers can now gradually roll out features without disrupting the entire user base, mitigating the risks associated with traditional testing methodologies. Feature flags empower developers to enable a feature in production for themselves during the development phase, allowing them to refine and perfect it before exposing it to broader testing audiences. This progressive approach ensures that potential issues are identified and addressed early in the development process. As the feature matures, it can be selectively enabled for testing teams, engineering groups or specific user segments, facilitating thorough validation at each step. The logistic nightmare of maintaining identical environments is alleviated, as testing in production becomes an integral part of the development workflow.


Enterprise Architecture in the Financial Realm

Enterprise architecture emerges as the North Star guiding banks through these changes. Its role transcends being a mere operational construct; it becomes a strategic enabler that harmonizes business and technology components. A well-crafted enterprise architecture lays the foundation for adaptability and resilience in the face of digital transformation. Enterprise architecture manifests two key characteristics: unity and agility. The unity aspect inherently provides an enterprise-level perspective, where business and IT methodologies seamlessly intertwine, creating a cohesive flow of processes and data. Conversely, agility in enterprise architecture construction involves deconstruction and subsequent reconstruction, refining shared and reusable business components, akin to assembling Lego bricks. ... Quantifying the success of digital adaptation is crucial. Metrics should not solely focus on financial outcomes but also on key performance indicators reflecting the effectiveness of digital initiatives, customer satisfaction, and the agility of operational models.


Cloud Security: Stay One Step Ahead of the Attackers

The relatively easy availability of cloud-based storage can lead to a data sprawl that is uncontrolled and unmanageable. In many cases, data which must be deleted or secured is left ungoverned, as organizations are not aware of their existence. In April 2022, cloud data security firm, Cyera, found unmanaged data store copies, and snapshots or log data. The researchers from this firm found out that 60% of the data security issues present in cloud data stores were due to unsecured sensitive data. The researchers further observed that over 30% of scanned cloud data stores were ghost data, and more than 58% of these ghost data stores contained sensitive or very sensitive data. ... Despite best practices advised by cloud service providers, data breaches that originate in the cloud have only increased. IBM’s annual Cost of a Data Breach report for example, highlights that 45% of studied breaches have occurred in the cloud. What is also noteworthy is that a significant 43% of reporting organizations which have stated they are just in the early stages or have not started implementing security practices to protect their cloud environments, have observed higher breach costs.


Five Questions That Determine Where AI Fits In Your Digital Transformation Strategy

Once you understand the why and the what, only then can you consider how your organization can use insights from AI to better accomplish its goals. How will your people respond, and how will they benefit? Today’s organizations have multiple technology partners, and they may have many that are all saying they can do AI. But how will your organization work with all those partners to make an AI solution come together? Many organizations are developing AI policies to define how it can be used. Having these guardrails ensures that your organization is operating ethically, morally and legally when it comes to the use of AI. ... It’s important to consider whether your organization is truly ready for AI at an enterprise or divisional level before deciding to implement AI at scale. Pilot projects can help you determine whether the implementation is generating the intended results and better understand how end users will interact with the processes. If you can't achieve customization and personalization across the organization, AI initiatives will be much tougher to implement.


A Dive into the Detail of the Financial Data Transparency Act’s Data Standards Requirements

The act is a major undertaking for regulators and regulated firms. It is also an opportunity for the LEI, if selected, to move to another level in the US, which has been slow to adopt the identifier, and significantly increase numbers that will strengthen the Global LEI System. While industry experts suggest regulators in scope of FDTA, collectively called Financial Stability Oversight Council (FSOC) agencies, initially considered data standards including the LEI and Financial Instrument Global Identifier published by Bloomberg, they suggest the LEI is the best match for the regulation’s requirements for ‘Covered agencies to establish “common identifiers” for information reported to covered regulatory agencies, which could include transactions and financial products/instruments.” ... The selection and implementation of a reporting taxonomy is more challenging as it will require many of the regulators to abandon existing reporting practices often based on PDFs, text and CSV files, and replace these with electronic reporting and machine-readable tagging. XBRL fits the bill, say industry experts, although there has been pushback from some agencies that see the unfunded requirement for change as too great a burden.


Data Center Approach to Legacy Modernization: When is the Right Time?

Legacy systems can lead to inefficiencies in your business. If we take one of the parameters mentioned above, such as cooling, one example of inefficiency could lie within an old server that’s no longer of use but still turned on. This could be placing unneccesary strain on your cooling, thus impacting your environmental footprint. Legacy systems may no longer be the most appropriate for your business, as newer technologies emerge that offer a more efficient method of producing the same, or better, results. If you neglect this technology, you might be giving your competitors an advantage which could be costly for your business. ... A cyber-attack takes place every 39 seconds, according to one report. This puts businesses at risk of losing or compromising not only their intellectual property and assets but also their customer’s data. This could put you at risk of damaging your reputation and even facing regulation fines. One of the best reasons to invest in digital transformation is for the security of your business. Systems that no longer receive updates can become a target of cyber-attacks and act as a vulnerability within your technology infrastructure. 


4 paths to sustainable AI

Hosting AI operations at a data center that uses renewable power is a straightforward path to reduce carbon emissions, but it’s not without tradeoffs. Online translation service Deepl runs its AI functions from four co-location facilities: two in Iceland, one in Sweden, and one in Finland. The Icelandic data center uses 100% renewably generated geothermal and hydroelectric power. The cold climate also eliminates 40% or more of the total data center power needed to cool the servers because they open the windows rather than use air conditioners, says Deepl’s director of engineering Guido Simon. Cost is another major benefit, he says, with prices of five cents per KW/hour compared to about 30 cents or more in Germany. The network latency between the user and a sustainable data center can be an issue for time-sensitive applications, says Stent, but only in the inference stage, where the application provides answers to the user, rather than the preliminary training phase. Deepl, with headquarters in Cologne, Germany, found it could run both training and inference from its remote co-location facilities. “We’re looking at roughly 20 milliseconds more latency compared to a data center closer to us,” says Simon.


Can ChatGPT drive my car? The case for LLMs in autonomy

Autonomous driving is an especially challenging problem because certain edge cases require complex, human-like reasoning that goes far beyond legacy algorithms and models. LLMs have shown promise in going beyond pure correlations to demonstrating a real “understanding of the world.” This new level of understanding extends to the driving task, enabling planners to navigate complex scenarios with safe and natural maneuvers without requiring explicit training. ... Safety-critical driving decisions must be made in less than one second. The latest LLMs running in data centers can take 10 seconds or more. One solution to this problem is hybrid-cloud architectures that supplement in-car compute with data center processing. Another is purpose-built LLMs that compress large models into form factors small enough and fast enough to fit in the car. Already we are seeing dramatic improvements in optimizing large models. Mistral 7B and Llama 2 7B have demonstrated performance rivaling GPT-3.5 with an order of magnitude fewer parameters (7 billion vs. 175 billion). Moore’s Law and continued optimizations should rapidly shift more of these models to the edge.


The Race to AI Implementation: 2024 and Beyond

The biggest problem is that the competitive and product landscape will be undergoing massive flux, so picking a strategic solution will be increasingly difficult. Younger companies that are less likely to be able to handle the speed of these advancements should focus on openness so that if they fail, someone else can pick up support, interoperability, and compatibility. If you aren’t locked into a single vendor’s solution and can mix and match as needed, you can move on or off a platform based on your needs. Like any new technology, take advice about hardware selection from the platform supplier. This means that if you are using ChatGPT, you want to ask OpenAI for advice about new hardware. If you are working with Microsoft or Google or any other AI developer, ask them what hardware they would recommend. ... You need a vendor that embraces all the client platforms for hybrid AI and one with a diverse, targeted solution set that individually focuses on the markets your firm is in. Right now, only Lenovo seems to have all the parts necessary thanks to its acquisition of Motorola.



Quote for the day:

"It's fine to celebrate success but it is more important to heed the lessons of failure." -- Bill Gates