Daily Tech Digest - May 31, 2024

Flawed AI Tools Create Worries for Private LLMs, Chatbots

The research underscores that the rush to integrate AI into business processes does pose risks, especially for companies that are giving LLMs and other generative-AI applications access to large repositories of data. ... The risks posed by the adoption of next-gen artificial intelligence and machine learning (AI/ML) are not necessarily due to the models, which tend to have smaller attack surfaces, but to the software components and tools for developing AI applications and interfaces, says Dan McInerney, lead AI threat researcher with Protect AI, an AI application security firm. "There's not a lot of magical incantations that you can send to an LLM and have it spit out passwords and sensitive info," he says. "But there's a lot of vulnerabilities in the servers that are used to host LLMs. The [LLM] is really not where you're going to get hacked — you're going to get hacked from all the tools you use around the LLM." ... "Exploitation of this vulnerability could affect the immediate functioning of the model and can have long-lasting effects on its credibility and the security of the systems that rely on it," Synopsys stated in its advisory.


Cyber resiliency is a key focus for us: Balaji Rao, Area VP – India & SAARC, Commvault

Referring to the classical MITRE framework, the recommendation is to “shift right” – moving focus towards recovery. After thoroughly assessing risks and implementing various tools, it’s crucial to have a solid recovery plan in place. Customers are increasingly concerned about scenarios where both their primary and disaster recovery (DR) systems are compromised by ransomware, and their backups are unavailable. According to a Microsoft report, in 98% of successful ransomware cases, backups are disabled. To address this concern, the strategy involves building a cyber resilient framework that prioritises recovery. ... For us, AI serves multiple purposes, primarily enhancing efficiency, scanning for threats, and addressing customer training and enablement needs. From a security perspective, we leverage AI extensively to detect ransomware-related risks. Its rapid data processing capabilities allow for thorough scanning across vast datasets, enabling pattern matching and identifying changes indicative of potential threats. We’ve integrated AI into our threat scanning solutions, strengthening our ability to detect and mitigate malware by leveraging comprehensive malware databases.


The importance of developing second-line leaders

Developing second-line leaders helps your business unit or function succeed at a whole new level: When your teams know that leadership development is a priority, they start preparing for future roles. The top talent will cultivate their skills and equip themselves for leadership positions, enhancing overall team performance. As the cascading effect builds, this proactive development has a multiplicative impact, especially if competition within the team remains healthy. It's also important for your personal growth as a leader: The most fulfilling aspect is the impact on yourself. If you measure your leadership success by contribution, attribution, and legacy, developing capable successors fulfils all three criteria. It ensures you contribute effectively, gain recognition for building strong teams, and leave a lasting legacy through the leaders you've developed. ... It starts with the self. Begin with delegation without abdication or evasion of accountability. This skill is a cornerstone of effective leadership, involving the entrusting of responsibilities to others while empowering them to assume ownership and make informed decisions.


Navigating The AI Revolution: Balancing Risks And Opportunities

Effective trust management requires specific approaches, such as robust monitoring systems, rigorous auditing processes and well-defined incident response plans. More importantly, in order for any initiative to address AI risks to be successful, we as an industry need to build a workforce of trained professionals. Those operating in the digital trust domain, including cybersecurity, privacy, assurance, risk and governance of digital technology, need to understand AI before building controls around it. The ISACA AI survey revealed that 85% of digital trust professionals say they will need to increase their AI skills and knowledge within two years to advance or retain their jobs. This highlights the importance of continuous learning and adaptation for cybersecurity professionals in the era of AI. Gaining a deeper understanding of how AI-powered attacks are altering the threat landscape, along with how AI can be effectively utilized by security practitioners, will be essential. As security professionals learn more about AI, they need to ensure that the methods being deployed align with an enterprise’s overarching need to maintain trust with its stakeholders.


CISO's Guide to 5G Security: Risks, Resilience and Fortifications

A strong security posture requires granular visibility into 5G traffic and automated security enforcement to effectively thwart attackers, protect critical services, and safeguard against potential threats to assets and the environment. This includes a focus on detecting and preventing attacks across every layer, interface and threat vector — from equipment (PEI) and subscriber (SUPI) identification to applications, signaling, data, network slices, malware, ransomware and more. ... To accomplish the task at hand brought about by 5G, CISOs must be prepared to provide a swift response to known and unknown threats in real time with advanced AI and machine learning, automation and orchestration tools. As perceptions shift from 4G as a consumer-focused mobile network to private 5G embedded across enterprise infrastructure, any kind of lateral network movement can bring about damage. ... Strategy and solution start with zero trust and can go as far as an entire 5G SOC dedicated to the nuances brought about by the next-gen network. The change and progress 5G promises is only as significant as our ability to protect networks and infrastructure from malicious actors, threats, and attacks.


Cloud access security brokers (CASBs): What to know before you buy

CASBs sit between an organization’s endpoints and cloud resources, acting as a gateway that monitors everything that goes in or out, providing visibility into what users are doing in the cloud, enforcing access control policies, and looking out for security threats. ... The original use case for CASBs was to address shadow IT. When security execs deployed their first CASB tools, they were surprised to discover how many employees had their own personal cloud storage accounts, where they squirreled away corporate data. CASB tools can help security teams discover and monitor unauthorized or unmanaged cloud services being used by employees. ... Buying a CASB tool can be complex. There’s a laundry list of possible features that fall within the broad CASB definition (DLP, SWG, etc.). And CASB tools themselves are part of a larger trend toward SSE and SASE platforms that include features such as ZTNA or SD-WAN. Enterprises need to identify their specific pain points — whether that’s regulatory compliance or shadow IT — and select a vendor that meets their immediate needs and can also grow with the enterprise over time.


What is model quantization? Smaller, faster LLMs

Why do we need quantization? The current large language models (LLMs) are enormous. The best models need to run on a cluster of server-class GPUs; gone are the days when you could run a state-of-the-art model locally on one GPU and get quick results. Quantization not only makes it possible to run an LLM on a single GPU; it also allows you to run it on a CPU or on an edge device. ... As you might expect, accuracy may be an issue when you quantize a model. You can evaluate the accuracy of a quantized model against the original model, and decide whether the quantized model is sufficiently accurate for your purposes. For example, TensorFlow Lite offers three executables for checking the accuracy of quantized models. You might also consider MQBench, a benchmark and framework for evaluating quantization algorithms under real-world hardware deployments that uses PyTorch. If the degradation in accuracy from post-training quantization is too high, then one alternative is to use quantization-aware training.
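The core idea behind post-training quantization is simple enough to sketch in a few lines. The snippet below is an illustrative toy (plain Python, symmetric int8 mapping), not how TensorFlow Lite or MQBench actually implement it: floating-point weights are mapped onto the integer range [-127, 127] via a single scale factor, and dequantization recovers an approximation of the originals.

```python
def quantize_int8(weights):
    """Symmetric post-training quantization: map floats onto [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0  # one float step per int step
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.42, -1.37, 0.08, 0.91, -0.55]
q, scale = quantize_int8(weights)
max_err = max(abs(w - r) for w, r in zip(weights, dequantize(q, scale)))
print(q)                     # integers in the int8 range
print(max_err <= scale / 2)  # → True: rounding error is at most half a step
```

The round-trip error is bounded by half the scale step, which hints at why accuracy degrades as models are pushed to fewer bits: the steps get coarser relative to the weights.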


Europe Declares War on Tech Spoofing

In the new Payment Services Regulation, members of the European Parliament argued that messaging services such as WhatsApp, digital platforms such as Facebook, or marketplaces such as Amazon and eBay could be liable for scams that originate on their platforms, on a par with banks and other payment service providers. ... Europe’s new payment regulations are now up for negotiation in Brussels. Large US tech firms and messaging apps are pushing to lower the liability risk. They argue banks, not them, should be responsible. With spoofing or impersonation scams, the fraudulent transaction occurs on banking service portals, not the platforms. And so, banks themselves should enhance their security measures or pay the price. Banks, not surprisingly, disagree. They cannot control the entry points that fraudsters use to reach consumers, whether it is by phone, messaging apps, online ads, or the dark web. Why shouldn’t telecom network operators, messaging, and other digital platforms also be obliged to stop fraudsters from reaching consumers and, if they fail, be held liable?


Process mining helps IT leaders modernize business operations

Process mining gives organizations the potential to make quicker, more informed decisions when overhauling business processes by leveraging data for insights. By using the information gleaned from process mining, companies can better streamline workflows, enhance resource allocation, and automate repetitive tasks. ... Successful deployment and maintenance of process mining requires a clear vision from the management team and board, Mortello says, as well as commitment and persistence. “Process mining doesn’t usually yield immediate, tangible results, but it can offer unique insights into how a company operates,” he says. “A leadership team with a long-term vision is crucial to ensure the technology is utilized to its full potential.” It’s also important to thoroughly analyze processes prior to “fixing” them. “Make sure you have a good handle on the process you think you have and the ones you really have,” Constellation Research’s Wang says. “What we see across the board is a quick realization that what’s assumed and what’s done is very different.”


Could the Next War Begin in Cyberspace?

In a cyberwar, disinformation campaigns will likely be used to spread misinformation and collect data that can be leveraged to sway public opinion on key issues, Janzen says. "We can build very sophisticated security systems, but so long as we have people using those systems, they will be targeted to willingly or unwillingly allow malicious actors into those systems." ... How long a cyberspace war might last is inherently unpredictable, characterized by its persistent and ongoing nature, Menon says. "In contrast to conventional wars, marked by distinct start and end points, cyber conflicts lack geographical constraints," he notes. "These battles involve continuous attacks, defenses, and counterattacks." The core of cyberspace warfare lies in understanding algorithms, devising methods to breach them, and inventing new technologies to dismantle legacy systems, Menon says. "These factors, coupled with the relatively low financial investment required, contribute to the sporadic and unpredictable nature of cyberwars, making it challenging to anticipate when they may commence."



Quote for the day:

"It's fine to celebrate success but it is more important to heed the lessons of failure." -- Bill Gates

Daily Tech Digest - May 30, 2024

Single solution for regulating AI unlikely as laws require flexibility and context

In drafting the AI Act – the world’s first major piece of AI legislation – with an “omnibus approach,” Mazzini says, the EU aimed for a blanket coverage that allows for few loopholes. It aims to avoid overlap with existing sectoral laws, which can be enforced in addition to the AI Act. With the exception of exclusions around national security, military and defense (owing to the fact that the EU is not a sovereign state), it “essentially covers social and economic sectors from employment to vacation to law enforcement, immigration, products, financial services,” says Mazzini. “The main idea that we put forward was the risk-based approach.” ... Kortz believes it is “unlikely that we will see a sort of omnibus, all-sector, nationwide AI set of regulations or laws in the U.S. in the near future.” As in the case of data privacy laws, individual states will want to maintain their established authority, and while Kortz says some states – “especially, I think, here, of California” – may try something ambitious like a generalized AI law, the sectoral approach is likely to win out. 


Why Intel is making big bets on Edge AI

“Edge is not the cloud, it is very different from the cloud because it is heterogeneous,” she says. “You have different hardware, you have different servers, and you have different operating systems.” Such devices can include anything from sensors and IoT devices to routers, integrated access devices (IAD), and wide area network (WAN) access devices. One of the benefits of Edge AI is that by storing all your data in an Edge environment rather than a data center, even when large data sets are involved, it speeds up the decision-making and data analysis process, both of which are vital for AI applications that have been designed to provide real-time insights to organizations. Another benefit borne out of the proliferation of generative AI is that, even though training models takes place in a centralized data center far away from users, inferencing – where the model applies its learned knowledge – can happen in an Edge environment, reducing the time required to send data to a centralized server and receive a response. Meanwhile, talent shortages, the growing need for efficiency, and the desire to improve time to market through the delivery of new services have all caused businesses to double down on automation.


Tensions in DeFi industry exposed by LayerZero’s anti-Sybil strategy

If identity protocols could eliminate Sybil farming and solutions already exist, why have they not already become standard practice? Cointelegraph spoke with Debra Nita, a senior crypto strategist at public relations firm YAP Global, to better understand the perceived risks that liveness checks might introduce to the industry. “Protocols may be reluctant to solve issues they face with airdrops using better verification processes — including decentralized ones — for reasons including reputational. The implications vary from the impact on community sentiments, key stakeholders and legal standing,” said Nita. Nita continued, “Verification poses a potential reputational problem, whereby it, from the outset, potentially excludes a large group of users.” Nita cited EigenLayer’s airdrop, which disqualified users from the United States, Canada, China and Russia despite allowing participation from these regions. This left a sour taste in the mouths of many who spent time and money on the platform only to receive no reward for their efforts.


Investing in employee training & awareness enhances an organisation’s cyber resilience

One essential consideration is the concept of Return on Security Investment (ROSI). Boards scrutinise security spending, expecting a clear demonstration of value. Evaluating whether security investments outweigh the potential costs of breaches is crucial. Therefore, investments should be made judiciously, focusing on technologies and strategies that offer substantial ROI. A key strategy is to consolidate and unify security technologies. Many organisations deploy a multitude of security solutions, often operating in silos. ... Furthermore, prioritising skill development is essential. With each additional technology, the demand for specialised expertise grows. Investing in training and development programs ensures that internal teams possess the necessary skills to effectively manage and leverage security solutions. Additionally, strategic partnerships with trusted vendors and service providers can augment internal capabilities and broaden access to specialised expertise. Ultimately, consolidating security technologies, focusing on ROI, and investing in skill development are key best practices for maximising the effectiveness of existing security investments.


Modular, scalable hardware architecture for a quantum computer

To build this QSoC, the researchers developed a fabrication process to transfer diamond color center “microchiplets” onto a CMOS backplane at a large scale. They started by fabricating an array of diamond color center microchiplets from a solid block of diamond. They also designed and fabricated nanoscale optical antennas that enable more efficient collection of the photons emitted by these color center qubits in free space. Then, they designed and mapped out the chip from the semiconductor foundry. ... They built an in-house transfer setup in the lab and applied a lock-and-release process to integrate the two layers by locking the diamond microchiplets into the sockets on the CMOS chip. Since the diamond microchiplets are weakly bonded to the diamond surface, when they release the bulk diamond horizontally, the microchiplets stay in the sockets. “Because we can control the fabrication of both the diamond and the CMOS chip, we can make a complementary pattern. In this way, we can transfer thousands of diamond chiplets into their corresponding sockets all at the same time,” Li says.


NIST launches ambitious effort to assess LLM risks

NIST’s new Assessing Risks and Impacts of AI (ARIA) program will “assess the societal risks and impacts of artificial intelligence systems,” the NIST statement said, including ascertaining “what happens when people interact with AI regularly in realistic settings.” ... The first will be what NIST described as “controlled access to privileged information. Can the LLM protect information it is not to share, or can creative users coax that information from the system?” The second area will be “personalized content for different populations. Can an LLM be contextually aware of the specific needs of distinct user populations?” The third area will be “synthesized factual content. [Can the LLM be] free of fabrications?” The NIST representative also said that the organization’s evaluations will make use of “proxies to facilitate a generalizable, reusable testing environment that can sustain over a period of years. ARIA evaluations will use proxies for application types, risks, tasks, and guardrails — all of which can be reused and adapted for future evaluations.”


Researchers Detailed Modern WAF Bypass Techniques With Burp Suite Plugin

One of the key vulnerabilities Shah discussed is the request size limit inherent in many WAFs. Due to performance constraints, WAFs typically inspect only a portion of the request body. For instance, AWS WAFs inspect up to 8 KB for Application Load Balancer and AWS AppSync protections and up to 64 KB for CloudFront and API Gateway protections. Similarly, Azure and Akamai WAFs have their own size limits, often leading to uninspected portions of large requests. This flaw can be exploited by placing malicious payloads beyond the inspection limit, bypassing the WAF. Shah introduced the nowafpls Burp Plugin to facilitate the exploitation of these request size limits. This tool simplifies the process by automatically padding out requests to exceed WAF inspection limits. Depending on the content type, the plugin inserts junk data at the cursor’s position, making it easier to bypass WAFs without manual intervention. For example, it adds comments in XML, junk keys and values in JSON, and junk parameters in URL-encoded data.
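To make the size-limit issue concrete, here is a minimal Python sketch of the padding idea for JSON bodies, as described for nowafpls (junk keys and values). This is not the plugin's actual code: the 8 KB target, key names, and filler size are illustrative assumptions.

```python
import json

def pad_json_body(body, target_bytes=8 * 1024):
    """Illustrative sketch: front-load a JSON body with junk keys so the
    real parameters serialize beyond a hypothetical 8 KB inspection window."""
    padded = {}
    filler = "x" * 1024
    i = 0
    # Keep adding junk entries until the serialized junk alone exceeds the limit
    while len(json.dumps(padded).encode()) <= target_bytes:
        padded[f"junk_{i}"] = filler
        i += 1
    padded.update(body)  # real payload lands after all the padding
    return json.dumps(padded).encode()

payload = pad_json_body({"comment": "parameters placed after the padding"})
print(len(payload))  # comfortably past 8 KB
```

Because Python dicts preserve insertion order when serialized, the real parameters end up past the inspection window, which is exactly why a WAF that truncates its view of the body never sees them.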


Four Essential Principles To Empower Your Decision-Making

First and foremost, write down your options. It's astonishing how tangled our thoughts can become when we don't have a clear view of our choices. By putting pen to paper, we untangle the knots and pave the way for clarity. Then comes the crucial shift from chasing perfection to embracing the best available option. Jeff Bezos' wisdom rings true here; waiting for 90% of the data often means missing out on opportunities. Sometimes, 70% is all we need to move forward. And once the decision is made, it's made. Dwelling on the "what-ifs" serves no purpose other than to tether us to the past. As Bezos famously put it, most decisions are reversible "two-way doors." So why let fear of making the wrong choice paralyze us? Indecision, I've learned, is its own form of suffering. Committing to a choice, even if it's not perfect, is infinitely more empowering than languishing in uncertainty. ... Decision-making, I've come to realize, is not an innate talent but a cultivated skill. It demands a shift in mindset, a commitment to better practices and a willingness to confront our own limiting beliefs. 


How CPUs will address the energy challenges of generative AI

Industry AI alliances, such as the AI Platform Alliance, play a crucial role in advancing CPU technology for artificial intelligence applications, focusing on enhancing energy efficiency and performance through collaborative efforts. These alliances bring together a diverse range of partners from various sectors of the technology stack—including CPUs, accelerators, servers, and software—to develop interoperable solutions that address specific AI challenges. This work spans from edge computing to large data centers, ensuring that AI deployments are both sustainable and efficient. These collaborations are particularly effective in creating solutions optimized for different AI tasks, such as computer vision, video processing, and generative AI. By pooling expertise and technologies from multiple companies, these alliances aim to forge best-in-breed solutions that deliver optimal performance and remarkable energy efficiency. Cooperative efforts such as the AI Platform Alliance fuel the development of new CPU technologies and system designs that are specifically engineered to handle the demands of AI workloads efficiently.


Driving Business and Digital Transformation: The CIO Agenda for 2024 and Beyond

Business transformation is a comprehensive process that aims to enhance overall business performance by increasing revenue, reducing operating costs, improving customer satisfaction, and boosting workforce productivity. ... Digital transformation, on the other hand, focuses on integrating digital technologies into all aspects of a business, fundamentally changing how it operates and delivers value to customers. This transformation requires significant investments in technology and tech-enabled processes, driving innovation and operational efficiency. ... Business and digital transformation are complementary processes. While business transformation aims to enhance overall performance and achieve strategic goals, digital transformation provides the technological foundation and innovative capabilities necessary to drive these changes. ... In 2024, Chief Information Officers (CIOs) are at the forefront of driving AI and innovation-led digital business transformations. Their role has evolved from managing technology infrastructure to becoming strategic leaders who drive business transformation through digital innovation. 



Quote for the day:

"Courage is doing what you're afraid to do. There can be no courage unless you're scared." -- Eddie Rickenbacker

Daily Tech Digest - May 29, 2024

Algorithmic Thinking for Data Scientists

While data scientists with computer science degrees will be familiar with the core concepts of algorithmic thinking, many increasingly enter the field with other backgrounds, ranging from the natural and social sciences to the arts; this trend is likely to accelerate in the coming years as a result of advances in generative AI and the growing prevalence of data science in school and university curriculums. ... One topic that deserves special attention in the context of algorithmic problem solving is that of complexity. When comparing two different algorithms, it is useful to consider the time and space complexity of each algorithm, i.e., how the time and space taken by each algorithm scales relative to the problem size (or data size). ... Some algorithms may manifest additive or multiplicative combinations of the above complexity levels. For example, a for loop followed by a binary search entails an additive combination of linear and logarithmic complexities, attributable to sequential execution of the loop and the search routine, respectively.
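The additive-versus-multiplicative distinction can be made concrete with a small sketch. Below, a binary search nested inside a for loop gives a multiplicative O(n log m); running the loop and a single search one after the other, as in the text's example, would instead combine additively as O(n) + O(log m).

```python
from bisect import bisect_left

def contains_all(sorted_haystack, needles):
    """Binary search (O(log m)) nested in a linear loop (O(n)): O(n log m),
    a multiplicative combination. A loop followed by one binary search
    would instead be additive: O(n) + O(log m)."""
    found = []
    for x in needles:                        # linear in len(needles)
        i = bisect_left(sorted_haystack, x)  # logarithmic in len(sorted_haystack)
        found.append(i < len(sorted_haystack) and sorted_haystack[i] == x)
    return found

print(contains_all([1, 3, 5, 7, 9], [3, 4, 9]))  # [True, False, True]
```

Comparing the nested version against a naive double loop (O(n·m)) is a typical exercise in trading a little algorithmic thinking for a large asymptotic win.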


Job seekers and hiring managers depend on AI — at what cost to truth and fairness?

The darker side to using AI in hiring is that it can bypass potential candidates based on predetermined criteria that don’t necessarily take all of a candidate’s skills into account. And for job seekers, the technology can generate great-looking resumes, but often they’re not completely truthful when it comes to skill sets. ... “AI can sound too generic at times, so this is where putting your eyes on it is helpful,” Toothacre said. She is also concerned about the use of AI to complete assessments. “Skills-based assessments are in place to ensure you are qualified and check your knowledge. Using AI to help you pass those assessments is lying about your experience and highly unethical.” There’s plenty of evidence that genAI can improve resume quality, increase visibility in online job searches, and provide personalized feedback on cover letters and resumes. However, concerns about overreliance on AI tools, lack of human touch in resumes, and the risk of losing individuality and authenticity in applications are universal issues that candidates need to be mindful of regardless of their geographical location, according to Helios’ Hammell.


Comparing smart contracts across different blockchains from Ethereum to Solana

Polkadot is designed to enable interoperability among various blockchains through its unique architecture. The network’s core comprises the relay chain and parachains, each playing a distinct role in maintaining the system’s functionality and scalability. ... Developing smart contracts on Cardano requires familiarity with Haskell for Plutus and an understanding of Marlowe for financial contracts. Educational resources like the IOG Academy provide learning paths for developers and financial professionals. Tools like the Marlowe Playground and the Plutus development environment aid in simulating and testing contracts before deployment, ensuring they function as intended. ... Solana’s smart contracts are stateless, meaning the contract logic is separated from the state, which is stored in external accounts. This separation enhances security and scalability by isolating the contract code from the data it interacts with. Solana’s account model allows for program reusability, enabling developers to create new tokens or applications by interacting with existing programs, reducing the need to redeploy smart contracts, and lowering costs.
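The stateless-program model described for Solana can be illustrated with a toy sketch. This is purely conceptual Python (real Solana programs are written in Rust, and the names here are invented): one piece of program logic holds no state of its own and operates on whatever account data it is handed, so the same deployed logic serves many accounts.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    owner: str                                # which program may mutate this data
    data: dict = field(default_factory=dict)  # state lives here, not in the program

def counter_program(account, instruction):
    """Stateless 'program': pure logic, reusable across any account it owns."""
    if account.owner != "counter_program":
        raise PermissionError("programs may only modify accounts they own")
    if instruction == "increment":
        account.data["count"] = account.data.get("count", 0) + 1

# The same program logic serves two independent accounts
alice = Account(owner="counter_program")
bob = Account(owner="counter_program")
counter_program(alice, "increment")
counter_program(alice, "increment")
counter_program(bob, "increment")
print(alice.data["count"], bob.data["count"])  # 2 1
```

Separating logic from state in this way is what lets a single deployed program be reused for new tokens or applications without redeployment, as the excerpt notes.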


3 things CIOs can do to make gen AI synch with sustainability

“If you’re only buying inference services, ask them how they can account for all the upstream impact,” says Tate Cantrell, CTO of Verne, a UK-headquartered company that provides data center solutions for enterprises and hyperscalers. “Inference output takes a split second. But the only reason those weights inside that neural network are the way they are is because of massive amounts of training — potentially one or two months of training at something like 100 to 400 megawatts — to get that infrastructure the way it is. So how much of that should you be charged for?” Cantrell urges CIOs to ask providers about their own reporting. “Are they doing open reporting about the full upstream impact that their services have from a sustainability perspective? How long is the training process, how long is it valid for, and how many customers did that weight impact?” According to Sundberg, an ideal solution would be to have the AI model tell you about its carbon footprint. “You should be able to ask Copilot or ChatGPT what the carbon footprint of your last query is,” he says. 


EU’s ChatGPT taskforce offers first look at detangling the AI chatbot’s privacy compliance

The taskforce’s report discusses this knotty lawfulness issue, pointing out ChatGPT needs a valid legal basis for all stages of personal data processing — including collection of training data; pre-processing of the data (such as filtering); training itself; prompts and ChatGPT outputs; and any training on ChatGPT prompts. The first three of the listed stages carry what the taskforce couches as “peculiar risks” for people’s fundamental rights — with the report highlighting how the scale and automation of web scraping can lead to large volumes of personal data being ingested, covering many aspects of people’s lives. It also notes scraped data may include the most sensitive types of personal data (which the GDPR refers to as “special category data”), such as health information, sexuality, political views, etc., which requires an even higher legal bar for processing than general personal data. On special category data, the taskforce also asserts that just because it’s public does not mean it can be considered to have been made “manifestly” public — which would trigger an exemption from the GDPR requirement for explicit consent to process this type of data.


Avoiding the cybersecurity blame game

Genuine negligence or deliberate actions should be handled appropriately, but apportioning blame and meting out punishment must be the final step in an objective, reasonable investigation. It should certainly not be the default reaction. So far, so reasonable, yes? But things are a little more complicated than this. It’s all very well saying, “don’t blame the individual, blame the company”. Effectively, no “company” does anything; only people do. The controls, processes and procedures that let you down were created by people – just different people. If we blame the designers of controls, processes and procedures… well, we are just shifting blame, which is still counterproductive. ... Managers should use the additional resources to figure out how to genuinely change the work environment in which employees operate and make it easier for them to do their job in a secure practical manner. Managers should implement a circular, collaborative approach to creating a frictionless, safer environment, working positively and without blame.


The decline of the user interface

The Ok and Cancel buttons played important roles. A user might go to a Settings dialog, change a bunch of settings, and then click Ok, knowing that their changes would be applied. But often, they would make some changes and then think “You know, nope, I just want things back like they were.” They’d hit the Cancel button, and everything would reset to where they started. Disaster averted. Sadly, this very clear and easy way of doing things somehow got lost in the transition to the web. On the web, you will often see Settings pages without Ok and Cancel buttons. Instead, you’re expected to click an X in the upper right to make the dialog close, accepting any changes that you’ve made. ... In the newer versions of Windows, I spend a dismayingly large amount of time trying to get the mouse to the right spot in the corner or edge of an application so that I can size it. If I want to move a window, it is all too frequently difficult to find a location at the top of the application to click on that will result in the window being relocated. Applications used to have a very clear title bar that was easy to see and click on.


Lawmakers paint grim picture of US data privacy in defending APRA

At the center of the debate is the American Privacy Rights Act (APRA), the push for a federal data privacy law that would either simplify a patchwork of individual state laws – or run roughshod over existing privacy legislation, depending on which state is offering an opinion. While harmonizing divergent laws seems wise as a general measure, states like California, where data privacy laws are already much stricter than in most places, worry about its preemptive clauses weakening their hard-fought privacy protections. Rodgers says APRA is “an opportunity for a reset, one that can help return us to the American Dream our Founders envisioned. It gives people the right to control their personal information online, something the American people overwhelmingly want,” she says. “They’re tired of having their personal information abused for profit.” From loose permissions on sharing location data to exposed search histories, there are far too many holes in Americans’ digital privacy for Rodgers’ liking. Pointing to the especially sensitive matter of children’s data, she says that “as our kids scroll, companies collect nearly every data point imaginable to build profiles on them and keep them addicted. ...”


Picking an iPaaS in the Age of Application Overload

Companies face issues using proprietary integration solutions, as they end up with black-box solutions with limited flexibility. For example, the inability to natively embed outdated technology into modern stacks, such as cloud native supply chains with CI/CD pipelines, can slow down innovation and complicate the overall software delivery process. Companies should favor iPaaS technologies grounded in open source and open standards. Can you deploy it to your container orchestration cluster? Can you plug it into your existing GitOps procedures? Such solutions not only ensure better integration into proven QA-tested procedures but also offer greater freedom to migrate, adapt and debug as needs evolve. ... As organizations scale, so too must their integration solutions. Companies should avoid iPaaS solutions offering only superficial “cloud-washed” capabilities. They should prioritize cloud native solutions designed from the ground up for the cloud, and that leverage container orchestration tools like Kubernetes and Docker Swarm, which are essential for ensuring scalability and resilience.
Shifting left is a cultural and practice shift, but it also includes technical changes to how a shared testing environment is set up. ... The approach scales effectively across engineering teams, as each team or developer can work independently on their respective services or features, thereby reducing dependencies. While this is great advice, it can feel hard to implement in the current development environment: If the process of releasing code to a shared testing cluster takes too much time, it doesn’t seem feasible to test small incremental changes. ... The difference between finding bugs as a user and finding them as a developer is massive: When an operations or site reliability engineer (SRE) finds a problem, they need to find the engineer who released the code, describe the problem they’re seeing, and present some steps to replicate the issue. If, instead, the original developer finds the problem, they can cut out all those steps by looking at the output, finding the cause, and starting on a fix. This proactive approach to quality reduces the number of bugs that need to be filed and addressed later in the development cycle.



Quote for the day:

"The best and most beautiful things in the world cannot be seen or even touched- they must be felt with the heart." -- Helen Keller

Daily Tech Digest - May 28, 2024

Partitioning an LLM between cloud and edge

By partitioning LLMs, we achieve a scalable architecture in which edge devices handle lightweight, real-time tasks while the heavy lifting is offloaded to the cloud. For example, say we are running medical scanning devices that exist worldwide. AI-driven image processing and analysis is core to the value of those devices; however, if we’re shipping huge images back to some central computing platform for diagnostics, that won’t be optimal. Network latency will delay some of the processing, and if the network is somehow out, which it may be in several rural areas, then you’re out of business. ... The first step involves evaluating the LLM and the AI toolkits and determining which components can be effectively run on the edge. This typically includes lightweight models or specific layers of a larger model that perform inference tasks. Complex training and fine-tuning operations remain in the cloud or other externalized systems. Edge systems can preprocess raw data to reduce its volume and complexity before sending it to the cloud or processing it with the local LLM.
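The routing logic described above can be sketched in a few lines. This is a toy illustration, not the article's actual system: `run_local_model`, `send_to_cloud`, and the confidence threshold are all hypothetical placeholders standing in for a quantized edge model and a cloud inference endpoint.

```python
def preprocess(image: bytes, max_bytes: int = 1024) -> bytes:
    """Edge-side reduction: shrink the payload before anything leaves the device."""
    return image[:max_bytes]  # stand-in for real downsampling/compression

def run_local_model(data: bytes):
    """Placeholder for a lightweight on-device model."""
    return ("normal", 0.95 if len(data) < 512 else 0.5)

def send_to_cloud(data: bytes) -> str:
    """Placeholder for the heavyweight cloud model."""
    return "cloud-diagnosis"

def classify(image: bytes, network_up: bool) -> str:
    reduced = preprocess(image)
    label, confidence = run_local_model(reduced)
    # Keep the answer on-device when the light model is confident,
    # or when the network is down (the rural-clinic case in the text).
    if confidence >= 0.9 or not network_up:
        return label
    return send_to_cloud(reduced)
```

The key design point survives even in this sketch: the device degrades gracefully to local-only inference when connectivity disappears, rather than going "out of business."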


How ISRO fosters a culture of innovation

As people move up the corporate totem pole their attention to detail gives way to big-picture thinking, and rightly so. You can’t look beyond and yet mind your every step on the way to an uncharted terrain. Yet when it comes to research and development, especially high-risk, high-impact projects, there is hardly any trade-off between thinking big and thinking in detail. You must do both. For instance, in the inaugural session of my last workshop, one of the senior directors was invited and the first thing he noticed was the mistake in the session duration. ... Now imagine this situation in a corporate context. How likely is the boss to call out a rather silly mistake? It was innocuous for all practical purposes. Most won’t point it out, let alone address it immediately. But not at ISRO.  ... Here’s the interesting thing. One of the participants was incessantly quizzing me, bordering on a challenge, and everyone was nonchalant about it. In a typical corporate milieu, such people would be shunned or would be asked to shut up. But not here. We had a volley of arguments, and people around seemed to enjoy it and encourage it. They were not only okay with varied points of view but also protective of it. 


GoDaddy has 50 large language models; its CTO explains why

“What we’ve done is built a common gateway that talks to all the various large language models on the backend, and currently we support more than 50 different models, whether they’re for images, text or chat, or whatnot. ... “Obviously, this space is accelerating superfast. A year ago, we had zero LLMs and today we have 50 LLMs. That gives you some indication of just how fast this is moving. Different models will have different attributes and that’s something we’ll have to continue to monitor. But by having that mechanism we can monitor with and control what we send and what we receive, we believe we can better manage that.” ... “In some ways, experiments that aren’t successful are some of the most interesting ones, because you learn what doesn’t work and that forces you to ask follow-up questions about what will work and to look at things differently. As teams saw the results of these experiments and saw the impact on customers, it’s really engaged them to spend more time with the technology and focus on customer outcomes.”


How to combat alert fatigue in cybersecurity

Alert fatigue is the result of several related factors. First, today’s security tools generate an incredible volume of event data. This makes it difficult for security practitioners to distinguish between background noise and serious threats. Second, many systems are prone to false positives, which are triggered either by harmless activity or by overly sensitive anomaly thresholds. This can desensitize defenders who may end up missing important attack signals. The third factor contributing to alert fatigue is the lack of clear prioritization. The systems generating these alerts often don’t have mechanisms that triage and prioritize the events. This can lead to paralyzing inaction because the practitioners don’t know where to begin. Finally, when alert records or logs do not contain sufficient evidence and response guidance, defenders are unsure of the next actionable steps. This confusion wastes valuable time and contributes to frustration and fatigue. ... The elements of the “SOC visibility triad” I mentioned earlier – NDR, EDR, and SIEM – are among the critical new technologies that can help.
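The missing prioritization the author describes can be illustrated with a toy triage sketch. The scoring formula and fields here are invented for illustration (real SOC tooling uses far richer signals); the point is simply that severity weighted by asset value, plus a boost for alerts corroborated across the visibility triad, already beats an unranked queue.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str             # e.g. "EDR", "NDR", "SIEM"
    severity: int           # 1 (info) .. 5 (critical)
    asset_criticality: int  # 1 .. 5, value of the affected host
    corroborated: bool      # seen by more than one tool in the triad?

def triage_score(a: Alert) -> float:
    """Toy prioritization: weight severity by asset value and boost
    cross-tool corroboration, which cuts false-positive noise."""
    score = a.severity * a.asset_criticality
    if a.corroborated:
        score *= 1.5
    return score

def prioritize(alerts):
    """Return alerts ordered most-urgent-first so analysts know where to begin."""
    return sorted(alerts, key=triage_score, reverse=True)
```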


Driving buy-in: How CIOs get hesitant workforces to adopt AI

If willingness and skill are the two main dimensions that influence hesitancy toward AI, employees who question whether taking the time to learn the technology is worth the effort are at the intersection. These employees often believe the AI learning curve is too steep to justify embarking on in the first place, he notes. “People perceive that AI is something complex, probably because of all of these movies. They worry: Will they have time and effort to learn these new skills and to adapt to these new systems?” Jaksic says. This challenge is not unique to AI, he adds. “We all prefer familiar ways of working, and we don’t like to disrupt our established day-to-day activities,” he says. Perhaps the best inroads then is to show that learning enough about AI to use it productively does not require a monumental investment. To this end, Jaksic has structured a formal program at KEO for AI education in bite-size segments. The program, known as Summer of Innovation, is organized around lunchtime sessions taught by senior leaders around high-level AI concepts. 


Taking Gen AI mainstream with next-level automation

Gen AI needs to be accountable and auditable. It needs to be instructed and learn what information it can retrieve. Combining it with IA serves as the linchpin of effective data governance, enhancing the accuracy, security, and accountability of data throughout its lifecycle. Put simply, by wrapping Gen AI with IA businesses have greater control of data and automated workflows, managing how it is processed, secured from unauthorized changes, and stored. It is this ‘process wrapper’ concept that will allow organizations to deploy Gen AI effectively and responsibly. Adoption and transparency of Gen AI – now – is imperative, as innovation continues to grow at pace. The past 12 months have seen significant innovations in large language models (LLMs) and Gen AI to simplify automations that tackle complex and hard-to-automate processes. ... Before implementing any sort of new automation technology, organizations must establish use cases unique to their business and undertake risk management assessments to avoid potential noncompliance, data breaches and other serious issues.


Third-party software supply chain threats continue to plague CISOs

As software gets more complex with more dependent components, it quickly becomes difficult to detect coding errors, whether they are inadvertent or added for malicious purposes as attackers try to hide their malware. “A smart attacker would just make their attack look like an inadvertent vulnerability, thereby creating extremely plausible deniability,” Williams says. ... “No single developer should be able to check in code without another developer reviewing and approving the changes,” the agency wrote in their report. This was one of the problems with the XZ Utils compromise, where a single developer gained the trust of the team and was able to make modifications on their own. One method is to combine a traditional third-party risk management program with specialized consultants that can seek out and eliminate these vulnerabilities, such as the joint effort by PwC and ReversingLabs’ automated tools. The open-source community also isn’t just standing still. One solution is a tool introduced earlier this month by the Open Source Security Foundation called Siren. 


Who is looking out for your data? Security in an era of wide-spread breaches

Beyond organizations introducing the technology behind closed doors to keep data safe, the interest in biometric smartcards shows that consumers also want to see improved protection play out in their physical transactions and finance management. This paradigm shift reflects not only a desire for heightened protection but also an acknowledgement of the limitations of traditional authentication methods. Attributing access to a fingerprint or facial recognition affirms to that person, in that moment, that their credentials are unique, and therefore that the data inside is safe. Encryption of fingerprint data within the card itself further ensures complete confidence in the solution. The encryption of personal identity data only strengthens this defense, ensuring that sensitive information remains inaccessible to unauthorized parties. These smartcards effectively mitigate the vulnerabilities associated with centralized databases. Biometric smartcards also change the dynamic of data storage: rather than housing biometric credentials in centralized databases, where targets are gathered in one location, smartcards sidestep that risk.


The Role of AI in Developing Green Data Centers

Green data centers, powered by AI technologies, are at the forefront of revolutionizing the digital infrastructure landscape with their significantly reduced environmental impact. These advanced facilities leverage AI to optimize energy consumption and cooling systems, substantially cutting energy use and greenhouse gas emissions and paving the way for more sustainable operational practices within the IT industry. Furthermore, sustainability initiatives integral to green data centers extend beyond energy efficiency. They encompass the use of renewable energy sources such as wind, solar, and hydroelectric power to further diminish the reliance on fossil fuels. ... AI-driven solutions can continuously monitor and analyze vast amounts of data regarding a data center’s operational parameters, including temperature fluctuations, server loads, and cooling system performance. By leveraging predictive analytics and machine learning algorithms, AI can anticipate potential inefficiencies or malfunctions before they escalate into more significant issues that could lead to excessive power use.
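The monitoring idea in the last sentence can be shown at its simplest. This is a deliberately tiny sketch, not a real data-center controller: a rolling baseline over recent temperature readings flags a reading that drifts beyond a threshold, the seed of the predictive flagging the article describes.

```python
from collections import deque

class CoolingMonitor:
    """Toy anomaly flagger: compare each reading to a rolling baseline
    so drift is caught before the cooling system is overwhelmed."""

    def __init__(self, window: int = 5, threshold_c: float = 3.0):
        self.window = deque(maxlen=window)  # recent temperature readings
        self.threshold_c = threshold_c      # allowed deviation in degrees C

    def observe(self, temp_c: float) -> bool:
        """Record a reading; return True if it deviates from the baseline."""
        if len(self.window) == self.window.maxlen:
            baseline = sum(self.window) / len(self.window)
            anomalous = abs(temp_c - baseline) > self.threshold_c
        else:
            anomalous = False  # still accumulating the baseline
        self.window.append(temp_c)
        return anomalous
```

Production systems replace the rolling mean with learned models over many parameters (server load, fan speed, power draw), but the escalate-before-it-breaks logic is the same.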


Don't Expect Cybersecurity 'Magic' From GPT-4o, Experts Warn

Despite the fresh capabilities, don't expect the model to fundamentally change how a gen AI tool helps either attackers or defenders, said cybersecurity expert Jeff Williams. "We already have imperfect attackers and defenders. What we lack is visibility into our technology and processes to make better judgments," Williams, the CTO at Contrast Security, told Information Security Media Group. "GPT-4o has the exact same problem. So it will hallucinate non-existent vulnerabilities and attacks as well as blithely ignore real ones." ... Attackers might still gain some minor productivity boosts thanks to GPT-4o's fresh capabilities, including its ability to do multiple things at once, said Daniel Kang, a machine learning research scientist who has published several papers on the cybersecurity risks posed by GPT-4. These "multimodal" capabilities could be a boon to attackers who want to craft realistic-looking deepfakes that combine audio and video, he said. The ability to clone voices is one of GPT-4o's new features, although other gen AI models already offered this capability, which experts said can potentially be used to commit fraud by impersonating someone else's identity.



Quote for the day:

"Defeat is not bitter unless you swallow it." -- Joe Clark

Daily Tech Digest - May 27, 2024

10 big devops mistakes and how to avoid them

“One of the significant challenges with devops is ensuring seamless communication and collaboration between development and operations teams,” says Lawrence Guyot, president of IT services provider Empowerment through Technology & Education (ETTE). ... Ensuring the security of the software supply chain in a devops environment can be challenging. “The speed at which devops teams operate can sometimes overlook essential security checks,” Guyot says. “At ETTE, we addressed this by integrating automated security tools directly into our CI/CD pipeline, conducting real-time security assessments at every stage of development.” This integration not only helped the firm identify vulnerabilities early, but also ensured that security practices kept pace with rapid deployment cycles, Guyot says. ... “Aligning devops with business goals can be quite the hurdle,” says Remon Elsayea, president of TechTrone IT Services, an IT solutions provider for small and mid-sized businesses. “It often seems like the rapid pace of devops initiatives can outstrip the alignment with broader business objectives, leading to misaligned priorities,” Elsayea says.


Why We Need to Get a Handle on AI

A recent World Economic Forum report also found a widening cyber inequity, which is accelerating the profound impact of emerging technologies. The path forward therefore demands strategic thinking, concerted action, and a steadfast commitment to cyber resilience. Again, this isn’t new. Organizations of all sizes and maturity levels have often struggled to maintain the central tenets of organizational cyber resilience. At the end of the day, it is much easier to use technology to create malicious attacks than it is to use technology to detect such a wide spectrum of potential attack vectors and vulnerabilities. The modern attack surface is vast and can overwhelm an organization as they determine how to secure it. With this increased complexity and proliferation of new devices and attack vectors, people and organizations have become a bigger vulnerability than ever before. It is often said that humans are the biggest risk when it comes to security and deepfakes can more easily trick people into taking actions that benefit the attackers. Therefore, what questions should security teams be asking to protect their organization?


Demystifying cross-border data transfer compliance for Indian enterprises

The variability of these laws introduces complex compliance issues. As Indian enterprises expand globally, the significance of robust data compliance management escalates. Organizations like ours assist companies worldwide with customized solutions tailored to the complexities of cross-border data transfer compliance. We ensure that businesses not only meet international data protection standards but also enhance their data governance practices through our comprehensive suite of tools. The evolution of India’s data localization policies could significantly influence global digital diplomacy. Moving from strict data localization to permitting certain cross-border data flows aligns India more closely with global digital trade norms, potentially enhancing its relationships with major markets like the US and EU. India is proactively revising its legal frameworks to better address the intricacies of cross-border data transfers within the realm of data privacy, especially for businesses. The forthcoming DPDPA regulations aim to balance the need for data protection with the operational requirements of digital commerce and governance.


Digital ID adoption: Implementation and security concerns

Digital IDs are poised to revolutionize sectors that rely heavily on secure and efficient identity verification. ... “As the Forrester experts note in the study, the complexities and disparities of global implementation across various landscapes highlight the strategic necessity of adopting a hybrid approach to digital IDs. Moreover, there is no single, universally accepted set of global standards for digital IDs that applies across all countries and sectors. Therefore, the large number of companies at the stage of active implementation demonstrates a growing need for frameworks and guidelines that aim to foster interoperability, security, and privacy across different digital ID systems,” said Ihar Kliashchou, CTO at Regula. “The good news is that several international organizations and standards bodies — New Technology Working Group in the International Civil Aviation Organization, the International Organization for Standardization (ISO), etc. — are working towards those standards. This seems to be a case in which slow and steady wins the race,” concluded Kliashchou.


Forrester: Preparing for the era of the AI PC

AI PCs are now disrupting the cloud-only AI model to bring that processing to local devices running any OS. But what is an AI PC exactly? Forrester defines an AI PC as a PC embedded with an AI chip and algorithms specifically designed to improve the experience of AI workloads across the central processing unit (CPU), graphics processing unit (GPU) and neural processing unit (NPU). ... An AI PC also offers a way to improve the collaboration experience. Dedicated AI chipsets will improve the performance of classic collaboration features, such as background blur and noise, by sharing resources across CPUs, GPUs and NPUs. On-device AI offers the ability to render a much finer distinction between the subject and the blurred background. More importantly, the AI PC will also enable new use cases, such as eye contact correction, portrait blur, auto framing, lighting adjustment and digital avatars. Another benefit of AI chipsets on PCs is that they provide the means to optimise device performance and longevity. Previous AI use cases were feasible on PCs, but they drained the battery quickly. The addition of an NPU will help preserve battery life while employees run sustained AI workloads.


Gartner Reveals 5 Trends That Will Make Software Engineer

Herschmann said that while there is a worry that AI could eliminate coding jobs instead of just enhancing them, that worry is somewhat unfounded. "If anything, we believe there's going to be a need for more developers, which may at first seem a little counterintuitive, but the reality is that we're still in the early stages of all of this," he said. "While generative AI is quite impressive in the beginning, if you dig a little bit deeper, you realize it's shinier than it really is," Herschmann said. So instead of replacing developers, AI will be more of a partner to them. ... Coding is just a small part of a developer's role. There are a lot of other things they need to do, such as keep the environment running, configuration work, and so on. So it makes sense to have a platform engineering team to take some of this work off developers' plates so they can focus on building the product, according to Herschmann. "Along with that though comes a potential scaling effect because you can then provide that same environment and the skills of that team to others as you scale up," he said. 


Beyond blockchain: Unlocking the potential of Directed Acyclic Graphs (DAGs)

DAGs are a type of data structure that uses a topological ordering, allowing for multiple branches that converge but do not loop back on themselves. Imagine a network of interconnected highways where each transaction can follow its own distinct course, branching off and joining forces with other transactions as required. This structure enables simultaneous transactions, eliminating the need for sequential processing, which is a bottleneck in traditional blockchain systems. ... One of the notable challenges of traditional blockchain technology is its scalability. DAGs address this issue by allowing more transactions to be processed in parallel, significantly increasing throughput, a key advantage for real-time applications in commodity trading and supply chain management. DAGs are more energy-efficient than proof-of-work blockchains, as they do not require substantial computational power for intensive mining activities, aligning with global and particularly India’s increasing focus on sustainable technological solutions. But the benefits of DAGs don’t stop here. Imagine a scenario where a shipment of perishable goods is delayed due to unforeseen circumstances, such as adverse weather conditions.
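The "branching highways" picture above maps directly onto a standard algorithm. The sketch below (illustrative only, not any particular DAG ledger's implementation) groups a DAG's transactions into topological layers: every transaction within a layer has no dependency on the others, so those are exactly the ones that can confirm in parallel instead of sequentially.

```python
from collections import defaultdict

def topological_layers(edges):
    """Group DAG nodes into dependency layers.

    edges is a list of (parent, child) pairs meaning child depends on
    parent. Nodes in the same returned layer are mutually independent,
    so they can be processed simultaneously.
    """
    indegree = defaultdict(int)
    children = defaultdict(list)
    nodes = set()
    for parent, child in edges:
        children[parent].append(child)
        indegree[child] += 1
        nodes.update((parent, child))

    layer = [n for n in nodes if indegree[n] == 0]  # no prerequisites
    layers = []
    while layer:
        layers.append(sorted(layer))
        nxt = []
        for n in layer:
            for c in children[n]:
                indegree[c] -= 1
                if indegree[c] == 0:  # all prerequisites settled
                    nxt.append(c)
        layer = nxt
    return layers
```

Two independent transactions A and B can both be validated at once, even though both feed a later transaction C: that parallelism is the throughput advantage over a strictly linear chain.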


Pioneering the future of personalised experiences and data privacy in the digital age

Zero-party data (ZPD) is at the core of Affinidi's strategy and is crucial for businesses navigating consumer interactions. ZPD refers to information consumers willingly share with companies for specific benefits, such as personalised offers and services. Consider an avid traveller who frequently books trips online. He might share his travel preferences with a travel company, such as favourite destinations, preferred accommodation types, and activity interests. This data allows the company to tailor its offerings precisely to his tastes. For instance, if he loves beach destinations and luxury hotels, the company can send him personalised travel packages featuring exclusive beach resorts with premium amenities. ... As data privacy regulations tighten, businesses must prioritise consented and accurate data sources, reducing legal risks and dependence on external data pools. Trust can be viewed as a currency, altering customers' loyalty and buying decisions. A survey by PwC showed that 33% of customers pay a premium to companies because they trust them.


Shut the back door: Understanding prompt injection and minimizing risk

You don’t have to be an expert hacker to attempt to misuse an AI agent; you can just try different prompts and see how the system responds. Some of the simplest forms of prompt injection are when users attempt to convince the AI to bypass content restrictions or ignore controls. This is called “jailbreaking.” One of the most famous examples of this came back in 2016, when Microsoft released a prototype Twitter bot that quickly “learned” how to spew racist and sexist comments. More recently, Microsoft Bing (now “Microsoft Copilot”) was successfully manipulated into giving away confidential data about its construction. Other threats include data extraction, where users seek to trick the AI into revealing confidential information. Imagine an AI banking support agent that is convinced to give out sensitive customer financial information, or an HR bot that shares employee salary data. And now that AI is being asked to play an increasingly large role in customer service and sales functions, another challenge is emerging. Users may be able to persuade the AI to give out massive discounts or inappropriate refunds. 
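One common first line of defense against the jailbreak attempts described above is a pre-filter on user input. The sketch below is deliberately naive and illustrative only: a few regex patterns for well-known jailbreak phrasings. Real defenses layer input screening with output filtering, strict tool permissions, and model-level safeguards, since pattern lists are trivially evaded.

```python
import re

# Illustrative patterns for common jailbreak phrasings; a real system
# would use far more robust classification than a fixed regex list.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"you are now",
    r"reveal (your )?(system )?prompt",
    r"pretend (to be|you are)",
]

def looks_like_injection(user_input: str) -> bool:
    """Flag input that matches known jailbreak phrasing."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)

def guarded_prompt(user_input: str) -> str:
    """Route flagged input to review instead of the model."""
    if looks_like_injection(user_input):
        return "REFUSED: input flagged for review"
    return f"USER: {user_input}"
```

Note how this screens only the obvious cases: the banking and HR data-extraction scenarios in the text would also need output-side controls, because a polite-sounding prompt can still coax out confidential data.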


Say goodbye to break-and-fix patches

A ‘break-and-fix’ mindset can be necessary in emergency situations, but it can also make things worse. While it can be tempting to view maintenance work as adding little value, failing to address these problems properly will only create future issues as you accumulate tech debt. Fixing those issues will require more resources — time, money, skills — that will undoubtedly hurt your organization. ... Tech debt is one of those “invisible issues” hiding in IT systems. Opting for quick fixes to solve immediate issues, rather than undertaking comprehensive upgrades might seem cost-effective and straightforward at first. However, over time, the accumulation of these patches contributes significantly to tech debt. ... Despite the potential consequences of inadequate and reactive maintenance, adopting a more proactive approach can be challenging for many businesses. Economic pressures and budgetary constraints are forcing leaders to reduce expenses and ‘do more with less’ — this leads to situations where areas not traditionally viewed as value-adding (like maintenance) are deprioritized. This is where managed services can help. 



Quote for the day:

"Smart leaders develop people who develop others; don't waste your time on those who won't help themselves." -- John C. Maxwell

Daily Tech Digest - May 26, 2024

The modern CISO: Scapegoat or value creator?

To showcase the value of their programs and demonstrate effectiveness, CISOs must establish clear communication and overcome the disconnect between the board and their team. It’s up to the CISO to ensure the board understands the level of cyber risk their organization is facing and what they need to increase the cyber resilience of their organization. Presenting cyber risk levels in monetary terms with actionable next steps is necessary to bring the board of directors on the same page and open an honest line of communication, while elevating their cybersecurity team to the role of value creator. ... CISOs are deeply wary about sharing too many details on their cybersecurity posture in the public domain, because of the unnecessary and preventable risk of exposing their organizations to cyberattacks, which are expected to cause $10.5 trillion in damages by 2025. Filing an honest 10K while preserving your organization’s cyber defenses requires a delicate balance. We’ve already seen Clorox fall victim when the balance was off. ... Given the pace at which the cybersecurity landscape is continuing to evolve, the CISO’s job is getting tougher. 


This Week in AI: OpenAI and publishers are partners of convenience

In an appearance on the “All-In” podcast, Altman said that he “definitely [doesn’t] think there will be an arms race for [training] data” because “when models get smart enough, at some point, it shouldn’t be about more data — at least not for training.” Elsewhere, he told MIT Technology Review’s James O’Donnell that he’s “optimistic” that OpenAI — and/or the broader AI industry — will “figure a way out of [needing] more and more training data.” Models aren’t that “smart” yet, leading OpenAI to reportedly experiment with synthetic training data and scour the far reaches of the web — and YouTube — for organic sources. But let’s assume they one day don’t need much additional data to improve by leaps and bounds. ... Through licensing deals, OpenAI effectively neutralizes a legal threat — at least until the courts determine how fair use applies in the context of AI training — and gets to celebrate a PR win. Publishers get much-needed capital. And the work on AI that might gravely harm those publishers continues.


Private equity looks to the CIO as value multiplier

A newer way of thinking about value creation focuses on IT, he says, because nearly every company, perhaps even the mom-and-pop coffee shop down the street, is a heavy IT user. “With this third wave, we’re seeing private equity firms retain in-house IT leadership, and that in-house IT leadership has led to more value creation,” Buccola says. “Firms with great IT leadership, a sound IT strategy, and a forward-thinking IT strategy, are creating more value.” ... “All roads lead to IT,” says Corrigan, a veteran of PE-backed firms, with World Insurance backed by Goldman Sachs and Charlesbank. “Every aspect of the business is dependent on some type of technology.” Corrigan sees CIOs being more frequently consulted when PE-backed firms look to IT systems to drive operational efficiencies. In some cases, cutting costs is a quicker path to return on investment than revenue growth. “Every dollar you can cut out of the bottom line is worth several dollars of revenue generated,” he says. ... “The modern CIO in a private equity environment is no longer just a back-office role but a strategic partner capable of driving the business forward,” he says.


Sad Truth Is, Bad Tests Are the Norm!

When it comes to testing, many people seem to have the worldview that hard-to-maintain tests are the norm and acceptable. In my experience, the major culprits are BDD frameworks based on text feature files. This amplifies waste. In theory, the extra feature file layer lets the user swap out the language at a later date; lets a business person write user stories and/or acceptance criteria; lets a business person read those user stories and acceptance criteria; and enables collaboration. In practice, you have added more complexity than you think, for little benefit. I am explicitly critiquing the approach of writing the extra feature file layer first, not the benefits of BDD as a concept. You test more efficiently, with better results, by not writing the feature file layer, as with Smart BDD, where it is generated by code. Here I compare the complexities and differences between Cucumber and Smart BDD. ... Culture is hugely important. I'm sure we, our bosses, and senior leaders would all ultimately agree on the following: for more value, you need more feedback and less waste; for more feedback, you need more value and less waste; for less waste, you need more value and more feedback.


6 Months Under the SEC’s Cybersecurity Disclosure Rules

There have been calls for regulatory harmonization. For example, the Biden-Harris Administration’s National Cybersecurity Strategy released last year calls for harmonization and streamlining of new and existing regulations to ease the burden of compliance. But in the meantime, enterprise leadership teams must operate in this complicated regulatory landscape, made only more complicated by budgetary issues. “Security budgets aren't growing for the most part. So, there's this tension between diverting resources to security versus diverting resources to compliance … on top of everything else that the CISOs have going on,” says Algeier. So, what should CISOs and enterprise leadership teams be doing as they continue to work under these SEC rules and other regulatory obligations? “CISOs should keep in mind the ability to quickly, easily, and efficiently fulfill the requirements laid out by the SEC, especially if they were to fall victim to an attack,” says Das. “This means having not only the right processes in place, but investments into tools that can ensure reporting occurs in the newly condensed timeline.”
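The “newly condensed timeline” referenced above is the SEC requirement to disclose a material cybersecurity incident on Form 8-K within four business days of determining the incident is material. A minimal sketch of the deadline arithmetic, skipping weekends only (a real implementation would also need to account for federal holidays):

```python
# Sketch: compute the Form 8-K disclosure deadline, assuming the
# SEC's four-business-day window from the materiality determination.
# Weekends are skipped; federal holidays are NOT handled here.

from datetime import date, timedelta


def disclosure_deadline(materiality_date: date, business_days: int = 4) -> date:
    d = materiality_date
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return d


# Materiality determined on Thursday 2024-05-30: the four business
# days are Fri, Mon, Tue, Wed, so the deadline is 2024-06-05.
print(disclosure_deadline(date(2024, 5, 30)))  # 2024-06-05
```

The point of Das’s advice is that this clock starts at the materiality determination, not at discovery, so incident-response processes and tooling need to surface the facts required for the filing well inside that window.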


Despite increased budgets, organizations struggle with compliance

“While regulations are driving strategy shifts and increased budgets, the talent shortage and fragmented infrastructure remain obstacles to compliance and resilience. To succeed, organizations must find the right balance between human expertise for complex situations and AI-enhanced automation tools for routine tasks. This will alleviate operational strain and ensure security professionals can focus on the parts of the job where human judgment is irreplaceable.” ... 93% of organizations report rethinking their cybersecurity strategy in the past year due to the rise of new regulations, with 58% stating they have completely reconsidered their approach. The strategy shifts are also impacting the roles of cybersecurity decision-makers, with 45% citing significant new responsibilities. 92% of organizations reported an increase in their allocated budgets. Among these organizations, a significant portion (36%) witnessed budget increases of 20% to 49%, and a notable 23% saw increases exceeding 50%. 


Fundamentals of Dimensional Data Modeling

Dimensional modeling focuses its diagramming on facts and dimensions. Facts contain crucial quantitative data used to track business processes; examples of these metrics include sales figures or number of subscriptions. Dimensions contain referential pieces of information; examples include customer name, price, date, or location. Keeping the dimensions separate from the facts makes it easier for analysts to slice-and-dice and filter data to align with the relevant context underlying a business problem. ... Dimensional modeling provides a basis for meaningful analytics gathered from a data warehouse for many reasons. Its processes standardize dimensions by presenting the data blueprint intuitively. ... Dimensional data modeling promises quick access to business insights when querying a data warehouse. By selecting the business process, defining the grain, and identifying the dimensions and fact tables, modelers provide a template that guides business conversations across teams. Alignment on the design requires these processes, and Data Governance plays an integral role in getting there.
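The fact/dimension split described above is easiest to see in a tiny star schema. This sketch uses Python's built-in sqlite3; the table and column names are illustrative. One fact table holds the quantitative measures at a declared grain (one row per sale), and the dimension tables hold the descriptive attributes analysts slice by.

```python
# Minimal star schema: a sales fact table referencing customer and
# date dimensions. Slicing by region only requires joining the fact
# to the relevant dimension. All names here are illustrative.

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY,
                           name TEXT, region TEXT);
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY,
                       full_date TEXT, month TEXT);
-- Grain: one row per sale.
CREATE TABLE fact_sales (
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    date_key     INTEGER REFERENCES dim_date(date_key),
    amount       REAL
);
""")

cur.executemany("INSERT INTO dim_customer VALUES (?, ?, ?)",
                [(1, "Acme", "West"), (2, "Globex", "East")])
cur.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                [(10, "2024-05-30", "May"), (11, "2024-05-31", "May")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(1, 10, 120.0), (2, 10, 80.0), (1, 11, 50.0)])

# Slice-and-dice: total sales by region, a fact-to-dimension join.
rows = cur.execute("""
    SELECT c.region, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_customer c USING (customer_key)
    GROUP BY c.region ORDER BY c.region
""").fetchall()
print(rows)  # [('East', 80.0), ('West', 170.0)]
```

Because every descriptive attribute lives in a dimension, the same fact table answers "by month", "by customer", or "by region" questions with the same join pattern, which is what makes the model a stable template for business conversations.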


Why the AI Revolution Is Being Led from Below

If shadow IT was largely defined by some teams’ use of unauthorized vendors and platforms, shadow AI is often driven by the use of AI tools like ChatGPT by individual employees and users, on their own and even surreptitiously. ... So why is that a problem? The proliferation of shadow AI can deliver many of the same benefits as officially sanctioned AI strategies: streamlining processes, automating repetitive tasks, and enhancing productivity. Employees are mainly drawn to deploy their own AI tools for precisely these reasons — they can hand off chunks of taxing work to these invisible assistants. Some industry observers see the plus side of all this and are actively encouraging the “democratization” of AI tools. At this week’s The Financial Brand Forum 2024, Cornerstone Advisors’ Ron Shevlin made it his top recommendation: “My #1 piece of advice is ‘drive bottom-up use.’ Encourage widespread AI experimentation by your team members. Then document and share the process and output improvements as widely as possible.”


A Strategic Approach to Stopping SIM Swap Fraud

Fraudsters are cautious about their return on investment. SIM swap fraud is a high-risk endeavor, and they typically expect higher rewards. It involves the risk of physically visiting telco operator premises, obtaining genuine-looking customer identification documents, using employee mules, or bribing bank or telco staff. Their targets are mostly high-balance accounts, including both bank accounts and wallets. Over the years, we have learned that customers with substantial account balances might often share bank details and OTPs during social engineering schemes, but they typically refrain from sharing their PIN due to the perceived risk involved. Even if a small percentage of customers were to share their PIN, the risk would still be minimized, as the majority of potential victims would refrain from doing so. The fraudsters would need to compromise at three levels instead of two: data gathering, compromising the telco operator, and persuading the customer. If customers detect something suspicious, they may become alert, resulting in fraudsters wasting their investments.


Complexity snarls multicloud network management

While each cloud provider does its best to make networking simple across clouds, all have very nuanced differences and varied best practices for approaching the same problem, says Ed Wood, global enterprise network lead at business advisory firm Accenture. This makes being able to create enterprise-ready, secured networks across the cloud challenging, he adds. Wasim believes that a lack of intelligent data utilization at crucial stages, from data ingestion to proactive management, further complicates the process. “The sheer scale of managing resources, coupled with the dynamic nature of cloud environments, makes it challenging to achieve optimal performance and efficiency.” Making network management even more challenging is a lack of clarity on roles and responsibilities. This can be attributed to an absence of agreement on shared responsibility models, Wasim says. As a result, stakeholders, including customers, cloud service providers, and any involved third parties, might each hold different views on responsibility and accountability regarding data compliance, controls, and cloud operations management.



Quote for the day:

"You may be disappointed if you fail, but you are doomed if you don't try." -- Beverly Sills