Daily Tech Digest - June 27, 2024

Is AI killing freelance jobs?

Work that has previously been done by humans, such as copywriting and developing code, is being replicated by AI-powered tools like ChatGPT and Copilot, leading many workers to anticipate that these tools may well swipe their jobs out from under them. And one population appears to be especially vulnerable: freelancers. ... While writing and coding roles were the most heavily affected freelance positions, they weren’t the only ones. For instance, the researchers found a 17% decrease in postings related to image creation following the release of DALL-E. Of course, the study is limited by its short-term outlook. Still, the researchers found that the trend of replacing freelancers has only increased over time. After splitting their nine months of analysis into three-month segments, each progressive segment saw further declines in the number of freelance job openings. Zhu fears that the number of freelance opportunities will not rebound. “We can’t say much about the long-term impact, but as far as what we examined, this short-term substitution effect was going deeper and deeper, and the demands didn’t come back,” Zhu says.


Can data centers keep up with AI demands?

As the cloud market has matured, leaders have started to view their IT infrastructure through the lens of ‘cloud economics.’ This means studying the cost, business impact, and resource usage of a cloud IT platform in order to collaborate across departments and determine the value of cloud investments. It can be a particularly valuable process for companies looking to introduce and optimize AI workloads, as well as reduce energy consumption. ... As the demand for these technologies continues to grow, businesses need to prioritize environmental responsibility when adopting and integrating AI into their organizations. It is essential that companies understand the impact of their technology choices and take steps to minimize their carbon footprint. Investing in knowledge around the benefits of the cloud is also crucial for companies looking to transition to sustainable technologies. Tech leaders should educate themselves and their teams about how the cloud can help them achieve their business goals while also reducing their environmental impact. As newer technologies like AI continue to grow, companies must prepare for the best ways to handle workloads. 


Building a Bulletproof Disaster Recovery Plan

A lot of companies can't effectively recover because they haven't planned their tech stack around the need for data recovery, which should be central to core technology choices. When building a plan, companies should understand the different ways that applications across an organization’s infrastructure are going to fail and how to restore them. ... When developing the plan, prioritizing the key objectives and systems is crucial to ensure teams don't waste time on nonessential operations. Then, ensure that the right people understand these priorities by building out and training your incident response teams with clear roles and responsibilities. Determine who understands the infrastructure and what data needs to be prioritized. Finally, ensure they're available 24/7, including with emergency contacts and after-hours contact information. While storage backups are a critical part of disaster recovery, they should not be considered the entire plan. While essential for data restoration, they require meticulous planning regarding storage solutions, versioning, and the nuances of cold storage. 


How are business leaders responding to the AI revolution?

While AI provides a potential treasure trove of possibilities, particularly when it comes to effectively using data, business leaders must tread carefully when it comes to risks around data privacy and ethical implications. ‌While the advancements of generative AI have been consistently in the news, so too have the setbacks major tech companies are facing when it comes to data use. ... “Controls are critical,” he said. “Data privileges may need to be extended or expanded to get the full value across ecosystems. However, this brings inherent risks of unintentional data transmission and data not being used for the purpose intended, so organisations must ensure strong controls and platforms that can highlight and visualise anomalies that may require attention.” ... “Enterprises must be courageous around shutting down automation and AI models that while showing some short-term gain may cause commercial and reputational damage in the future if left unchecked.” He warned that a current skills shortage in the area of AI might hold businesses back. 


AI development on a Copilot+ PC? Not yet

Although the Copilot+ PC platform (and the associated Copilot Runtime) shows a lot of promise, the toolchain is still fragmented. As it stands, it’s hard to go from model to code to application without having to step out of your IDE. However, it’s possible to see how a future release of the AI Toolkit for Visual Studio Code can bundle the QNN ONNX runtimes, as well as make them available to use through DirectML for .NET application development. That future release needs to be sooner rather than later, as devices are already in developers’ hands. Getting AI inference onto local devices is an important step in reducing the load on Azure data centers. Yes, the current state of Arm64 AI development on Windows is disappointing, but that’s more because it’s possible to see what it could be, not because of a lack of tools. Many necessary elements are here; what’s needed is a way to bundle them to give us an end-to-end AI application development platform so we can get the most out of the hardware. For now, it might be best to stick with the Copilot Runtime and the built-in Phi-Silica model with its ready-to-use APIs.


The Role of AI in Low- and No-Code Development

While AI is invaluable for generating code, it's also useful in your low- and no-code applications. Many low- and no-code platforms allow you to build and deploy AI-enabled applications. They abstract away the complexity of adding capabilities like natural language processing, computer vision, and AI APIs from your app. Users expect applications to offer features like voice prompts, chatbots, and image recognition. Developing these capabilities "from scratch" takes time, even for experienced developers, so many platforms offer modules that make it easy to add them with little or no code. For example, Microsoft has low-code tools for building Power Virtual Agents (now part of its Copilot Studio) on Azure. These agents can plug into a wide variety of skills backed by Azure services and drive them using a chat interface. Low- and no-code platforms like Amazon SageMaker and Google's Teachable Machine manage tasks like preparing data, training custom machine learning (ML) models, and deploying AI applications. 


The 5 Worst Anti-Patterns in API Management

As a modern Head of Platform Engineering, you strongly believe in Infrastructure as Code (IaC). Managing and provisioning your resources in declarative configuration files is a modern and great design pattern for reducing costs and risks. Naturally, you will make this a strong foundation while designing your infrastructure. During your API journey, you will be tempted to take some shortcuts because it can be quicker in the short term to configure a component directly in the API management UI than setting up a clean IaC process. Or it might be more accessible, at first, to change the production runtime configuration manually instead of deploying an updated configuration from a Git commit workflow. Of course, you can always fix it later, but deep inside, those kludges stay there forever. Or worse, your API management product needs to provide a consistent IaC user experience. Some components need to be configured in the UI. Some parts use YAML, others use XML, and you even have proprietary configuration formats. 


Ownership and Human Involvement in Interface Design

When an interface needs to be built between two applications with different owners, without any human involvement, we have the Application Integration scenario. Application Integration is similar to IPC in some respects; for example, the asynchronous broker-based choice I would make in IPC, I would also make for Application Integration for more or less the same reasons. However, in this case, there is another reason to avoid synchronous technologies: ownership and separation of responsibilities. When you have to integrate your application with another one, there are two main facts you need to consider: a) Your knowledge of the other application and how it works is usually low or even nonexistent, and b) Your control of how the other application behaves is again low or nonexistent. The most robust approach to application integration (again, a personal opinion!) is the approach shown in Figure 3. Each of the two applications to be integrated provides a public interface. The public interface should be a contract. This contract can be a B2B agreement between the two application owners.


Reports show ebbing faith in banks that ignore AI fraud threat

The ninth edition of its Global Fraud Report says businesses are worried about the rate at which digital fraud is evolving and how established fraud threats such as phishing may be amplified by generative AI. Forty-five percent of companies are worried about generative AI’s ability to create more sophisticated synthetic identities. Generative AI and machine learning are named as the leading trends in identity verification – both the engine for, and potential solution to, a veritable avalanche of fraud. IDology cites recent reports from the Association of Certified Fraud Examiners (ACFE), which say businesses worldwide lose an estimated 5 percent of their annual revenues to fraud. “Fraud is changing every year alongside growing customer expectations,” writes James Bruni, managing director of IDology, in the report’s introduction. “The ability to successfully balance fraud prevention with friction is essential for building customer loyalty and driving revenue.” “As generative AI fuels fraud and customer expectations grow, multi-layered digital identity verification is essential for successfully balancing fraud prevention with friction to drive loyalty and grow revenue.”


What IT Leaders Can Learn From Shadow IT

Despite its shady reputation, shadow IT is frequently more in tune with day-to-day business needs than many existing enterprise-deployed solutions, observes Jason Stockinger, a cyber leader at Royal Caribbean Group, where he's responsible for shoreside and shipboard cyber security. "When shadow IT surfaces, organization technology leaders should work with business leaders to ensure alignment with goals and deadlines," he advises via email. ... When assessing a shadow IT tool's potential value, it's crucial to evaluate how it might be successfully integrated into the official enterprise IT ecosystem. "This integration must prioritize the organization's ability to safely adopt and incorporate the tool without exposing itself to various risks, including those related to users, data, business, cyber, and legal compliance," Ramezanian says. "Balancing innovation with risk management is paramount for organizations to harness productivity opportunities while safeguarding their interests." IT leaders might also consider turning to their vendors for support. "Current software provider licensing may afford the opportunity to add similar functionality to official tools," Orr says.



Quote for the day:

"Ninety percent of leadership is the ability to communicate something people want." -- Dianne Feinstein

Daily Tech Digest - June 26, 2024

How Developers Can Head Off Open Source Licensing Problems

There are proactive steps developers can take as well. For instance, developers can opt for code that isn’t controlled by a single vendor. “The other side, beyond the licensing, is to look and to understand who’s behind the license, the governance, policy,” he said. Another option to provide some cushion of protection is to use a vendor that specializes in distributing a particular open source solution. A distro vendor can provide indemnification against exposure, he said. They also provide other benefits, such as support and certification to run on specific hardware set-ups. Developers can also look for open source solutions that are under a foundation, rather than a single company, he suggested, although he cautioned that even that isn’t a failsafe measure. “Even foundations are not bulletproof,” he said. “Foundations provide some oversight, some governance and some other means to reduce the risk. But if ultimately, down the path, it ends up again being backed up by a single vendor, then it’s an issue even under a foundation.”


Line of Thought: A Primer on State-Sponsored Cyberattacks

A cyberattack may be an attractive avenue for a state actor and/or its affiliates since it may give them the ability to disrupt an adversary while maintaining plausible deniability.15 It may also reduce the risk of a retaliatory military strike by the victim.16 That’s because actually determining who was behind a cyberattack is notoriously difficult: attacks can be shrouded behind impersonated computers or hijacked devices and it may take months before actually discovering that an attack has occurred.17 Some APTs leverage an approach called “living off the land” which enables them to disguise an attack as ordinary network or system activities.18 Living off the land enabled one APT actor to reportedly enter network systems in America’s critical infrastructure and conduct espionage—reportedly with an eye toward developing capabilities to disrupt communications in the event of a crisis.19 The attack occurred sometime in 2021, but, due to the stealthy nature of living off the land techniques, wasn’t identified until 2023.


Taking a closer look at AI’s supposed energy apocalypse

Determining precisely how much of that data center energy use is taken up specifically by generative AI is a difficult task, but Dutch researcher Alex de Vries found a clever way to get an estimate. In his study "The growing energy footprint of artificial intelligence," de Vries starts with estimates that Nvidia's specialized chips are responsible for about 95 percent of the market for generative AI calculations. He then uses Nvidia's projected production of 1.5 million AI servers in 2027—and the projected power usage for those servers—to estimate that the AI sector as a whole could use up anywhere from 85 to 134 TWh of power in just a few years. To be sure, that is an immense amount of power, representing about 0.5 percent of projected electricity demand for the entire world (and an even greater ratio in the local energy mix for some common data center locations). But measured against other common worldwide uses of electricity, it's not representative of a mind-boggling energy hog. A 2018 study estimated that PC gaming as a whole accounted for 75 TWh of electricity use per year, to pick just one common human activity that's on the same general energy scale


Stepping Into the Attacker’s Shoes: The Strategic Power of Red Teaming

Red Teaming service providers are spending years preparing their infrastructure to conduct Red Teaming exercises. It is not feasible to quickly build a customized infrastructure for a specific customer; this requires prior development. Tailoring the service to a particular client can take anywhere from one to four months. During this period, preliminary exploration takes place. Red Teams use this time to identify and construct a combination of infrastructure elements that will not raise alarms among SOC defenders. ... The focus has shifted towards building a more layered defense, driven by Covid restrictions, remote work and the transition to the cloud. As companies enhance their defensive measures, there is a growing need to conduct Red Teaming projects to evaluate the effectiveness of these new systems and solutions. The risk of increased malicious insider activity has made the hybrid model increasingly relevant for many Red Teaming providers. This approach is neither a complete White Box, where detailed infrastructure information is provided upfront, nor traditional Red Teaming.


Six NFR strategies to improve software performance and security

Based on their analysis and discussions with developers, the researchers identified six key points: Prioritization and planning: NFRs should be treated with as much priority as other requirements. They should be planned in advance and reviewed throughout a development project. Identification and discussion: NFRs should be identified and discussed early in the development process, ideally in the design phase. During the evolution of the software, these NFRs should be revisited if necessary. Use of technologies allied with testing: The adequacy of the NFR can be verified through technologies already approved by the market, where the NFRs associated with those projects satisfy the project's complexity. Benchmarks: Using benchmarks to simulate the behavior of a piece of code or algorithm under different conditions is recommended, since it allows developers to review and refactor code when it is not meeting the project-specified NFRs. Documentation of best practices: By keeping the NFRs well-documented, developers will have a starting point to address any NFR problem when they appear.


Exploring the IT Architecture Profession

In IT architecture, it takes many years to gain the knowledge and skills required to be a professional architect. In my opinion, at the core of our profession are our knowledge and skills in technology. This is what we bring to the table; it is our knowledge and expertise in both business and technology that make the IT architecture profession unique. In addition to business and technology skills, it is essential that the architect possesses soft skills such as leadership, politics, and people management. These are often undervalued. When communicating IT architecture and what an IT architect does, I notice that there are a number of recurring aspects: scope, discipline, domain, and role. ... Perhaps the direction for the profession is to focus on gaining consensus around how we describe scope, domain and discipline rather than worrying too much about titles. An organisation should be able to describe a role from these aspects and describe the required seniority. At the end of the day, this was a thought-provoking exercise and with regards to my original problem, the categorisation of architecture views, I found that scope was perhaps the simplest way to organise the book.


Why collaboration is vital for achieving ‘automation nirvana’

Beeson says that one of the main challenges of implementing automation is getting different teams to collaborate on creating automation content. He explains that engineers and developers often have their own preferred programming language or tools and can be reluctant to share content or learn something new. “A lack of collaboration prevents the ‘automation nirvana’ of removing humans from complex processes, dramatically reducing automation benefits,” he says. “Individuals tend to be reluctant to contribute if they don’t have confidence in the automation tool or platform. “Automation content developers want the automation language to be easy to learn, compatible with their technology choices and provide control to ensure the content they contribute is not misused or modified.” ... When it comes to the future of automation, Beeson has no shortage of thoughts and predictions for the sector, especially relating to the role of automation in defence. “Defence is not immune from the ‘move to cloud’ trend, so hybrid cloud automation is becoming ever more prevalent in the sector,” he says


Securing the digital frontier: Crucial role of cybersecurity in digital transformation advisory

Advisory services have the expertise to perform in-depth technical security assessments to identify and help prioritize vulnerabilities in an organization’s infrastructure. These assessments include the use of specialised tools and manual testing to do a comprehensive assessment. Systems are examined to validate if they are following security best practices and prescribed industry standards. ... Advisors help organisations develop threat models to identify potential attack vectors and assess associated risks. Several methodologies like STRIDE, Kill Chain and PASTA are used to systematically analyse threats and risks. ... An organisation’s security is only as good as its weakest link, and generally, the weakest link is an individual of the organisation. Advisory services undertake regular training to educate and inform employees on security best practices. They can also support with simulation training such as phishing simulations and develop comprehensive security awareness programs that cover topics like secure password practices, data handling, data privacy, and incident reporting.


Delving Into the Risks and Rewards of the Open-Source Ecosystem

While some risk is inevitable, enterprise teams need to have an understanding of that risk and use open-source software accordingly. “As a CISO, the biggest risk I see is for organizations not to be intentional about how they use open-source software,” says Hawkins. “It's extremely valuable to build on top of these great projects, but when we do, we need to make sure we understand our dependencies. Including the evaluation of the open-source components as well as the internally developed components is key to being able to accurately [understand] our security posture.” ... So, it isn’t feasible to ditch open-source software, and risk is part of the deal. For enterprises, that reality necessitates risk management. And that need only increases as does reliance on open-source software. “As we move towards cloud and these kind of highly dynamic environments, our dependency on open-source is going up even higher than it ever did in the past,” says Douglas. If enterprise leaders shift how they view open-source software, they may be able to better reap its rewards while mitigating its risks.


Rethinking physical security creates new partner opportunities

Research conducted by Genetec has showed a 275% increase in the number of end users wanting to take more physical security workloads to the cloud. Research also indicates that many organisations aren’t treating SaaS and cloud as an ‘all or nothing’ proposition. However, while a hybrid-cloud infrastructure provides flexibility, it also has implications in being the gateway to the physical security cloud journey. Organisations needs to ensure that there are tools in place that can protect data regardless of their location. ... Organisations that aren’t able to keep up with the upgrade cycle often become subject to the consumption gap. This is where the end user can see the platform evolving with new features and functionality, but are unable to take advantage of all of it. The bigger the consumption gap, the more likely it’s to be holding the organisation back from physical security best practices. SaaS promises to close that gap because it keeps organisations on the latest software version. Importantly, their solution is updated in a way that is pre-approved by the organisation and on a timeframe of their choosing.



Quote for the day:

"Without growth, organizations struggle to add talented people. Without talented people, organizations struggle to grow." -- Ray Attiyah

Daily Tech Digest - June 25, 2024

Six Strategies For Making Smarter Decisions

Broaden your options - Instead of Options A and B, what about C or even D? A technique I use in working with client organizations is to set up a “challenge statement” that inevitably reveals multiple possibilities to be decided upon. I’ll have small groups of four or five people take 10 minutes to list all the options without discussing or critiquing them during the exercise. Frame challenge statements thusly: “In what ways might we accomplish X?” ... Listen to your gut - Intuition is knowing something without knowing quite how we know it. All of us have it, but in a data-driven world, listening to it becomes harder. Before making an important choice, one executive I interviewed gathers information, weighs all the facts – then takes time to stop and listen to what his gut is telling him. “When a decision doesn’t feel good,” one executive commented, “It feels like a stomachache. And when a decision feels right, it’s like I’ve eaten a great meal. If I don’t feel good in my gut about a decision, I don’t care if the numbers say we’re going to make a billion dollars, I won’t go ahead with it. That’s how important intuition is to me.”


Overcoming Stagnation And Implementing Change To Facilitate Business Growth: The How-To

Overcoming stagnation is about understanding that doing the same thing over and over again will give you the same results over and over again. But bringing about change in the former will naturally impact the latter. The three main objectives in any transformation initiative that aims to set up a strong foundation to scale or grow a business are: become financially lean with the ability to scale either up or down as per market demands, become internally efficient, and to run its day-to-day operations independent of its founder or leader. ... Ideally, it would be wise to aim to maintain 60-70% of the total operating cost as fixed costs, while keeping the remaining as variable costs, allowing for flexibility to adjust the costing structure based on business needs, while maintaining profitability throughout the transition- and beyond. When an efficient business achieves this level of financial optimization and is managed by a competent team, then the founder or leader will have the time to work on the business, concentrating on long-term growth strategic issues, instead of the day-to-day of the enterprise.


Build your resilience toolkit: 3 actionable strategies for HR leaders

Go beyond current job descriptions to identify talent or skill gaps. Focus on future-focused talent acquisition strategies and design upskilling and reskilling programs. Aim to close the skills gap and attract talent with transferable skill sets and a growth mindset. This approach keeps your workforce adaptable and prepared for future challenges. ... Adapting work models and fostering continuous learning cultures are essential. HR leaders can implement flexible work arrangements, such as remote or hybrid models. Encouraging experimentation and risk-taking within teams, and integrating continuous learning opportunities into performance management systems, are key actionable tips. Agile approaches help HR leaders adapt quickly to shifting business requirements. Collaborative work environments are critical in an agile HR strategy. ... Open communication and safe spaces are essential for a supportive culture. HR leaders can encourage employees to voice concerns by creating channels for open dialogue. This approach ensures employees feel heard and valued, contributing to a more inclusive workplace.


The 4 skills you need to engineer a career in automation

Automation engineers are often required to work cohesively with multidisciplinary teams and for that reason, it can be useful to have a solid grasp of workplace soft skills, in addition to compulsory hard skills. Automation engineers are expected to take complex, highly nuanced information and relay it back to not only their peers, but to people who do not have a strong technical background. This requires expert communication skills, as well as an ability to collaborate. ... If you are considering a career as an automation engineer, then a foundational understanding of programming languages and how they are applied is compulsory, as you will frequently need to write and maintain the code that keeps operation systems running. The choice of programming language greatly impacts the success of automation in the workplace, as it will provide and improve versatility, scalability and integration. ... As AI advances, global workplaces will have to evolve in tandem, meaning automation engineers will have to have a standard level of AI and machine learning skills to stay competitive. 


Navigating the Evolving World of Cybersecurity Regulations in Financial Services

Accountability for cybersecurity measures is a key element of the NYDFS regulations. CISOs now must provide a report updating their governing body or board of directors on the company’s cybersecurity posture and plans to fix any security gaps, Burke says. Maintaining accountability entails communicating with the board about cybersecurity risks, explains Kirk J. Nahra, partner and co-chair of the cybersecurity and privacy practice at law firm WilmerHale. “The board needs to understand that its job is to evaluate major issues for a company, and a ransomware attack that shuts down the whole business is a major risk,” Nahra says. “The boards have to become more sophisticated about information security.” ... The NYDFS calls for organizations to have cybersecurity policies that are reviewed and approved annually. Previously, regulations concentrated more on processes and best practices, Nahra says. Now, they are becoming more prescriptive, but multiple regulators are inconsistent, and their standards may conflict at times.


How Banks Can Get Past the AI Hype and Deliver Real Results

If the bank’s backend systems aren’t automated, all the rapidly responding chatbot has done is make a promise that a human will have to solve when they finally get to that point in the inbox, Bandyopadhyay says. When they ultimately get back to the customer, that efficient chatbot doesn’t actually look so efficient. Bandyopadhyay explains that this is merely meant as an illustration of the bank has to be ready for front ends and back ends of customer-facing systems to be in synch. The potential result is alienating customers with significant problems. ... The real power of GenAI is its ability to digest and deploy unstructured data. But Bandyopadhyay points out that most banks use legacy systems that can’t capture any of that information. “It’s not data that you put in rows and columns on a spreadsheet,” says Bandyopadhyay. “It’s what language we write and that we speak.” To truly implement GenAI in the long run, he continues, banks will have to lick the longstanding legacy systems problem. Until then, most of their databases aren’t talking GenAI’s language.


Singapore lays the groundwork for smart data center growth

In a move that stunned industry observers, Singapore announced on May 30 that it would release more data center capacity to the tune of 300MW, a substantial figure and a new policy direction for the nation-state. ... The 300MW will come as part of a newly unveiled Green Data Centre (DC) Roadmap drawn up by IMDA, so it does have conditions attached. According to the statutory board, the roadmap was developed to chart a “sustainable pathway” for the continued growth of data centers in Singapore to support the nation’s digital economy. Per the roadmap, Singapore hopes to work with the industry to pioneer solutions for more resource-efficient data centers. One way to view it is as a carrot that it can use to spur data center operators to innovate and accelerate data center efficiency on both hardware and software levels. It is all well and good to talk about allocating hundreds of megawatts of capacity for data centers. But with electrical grids around the world heaving from electrification and sharply rising power demands, is Singapore in a position to deliver this capacity to data center operators today?


Information Blocking of Patient Records Could Cost Providers

Information blocking is defined as a practice that is likely to interfere with the access, exchange or use of electronic health information, except as required by law or specified in one of nine information blocking exceptions. ... Under the security exception, it is not considered information blocking for an actor to interfere with the access, exchange or use of EHI to protect the security of that information, provided certain conditions are met. For example, during a security incident, such as a ransomware attack, a healthcare provider might be unable to provide access or exchange to certain EHI for a time, and that would not constitute information blocking. ... So, as of now, if a healthcare provider does not participate in any of the CMS payment programs that are currently subject to the disincentives, they do not face any potential penalties for information blocking. But that could change moving forward. HHS officials during a briefing with media on Monday said HHS is considering adding other disincentives for healthcare providers that do not participate in such CMS programs. 


How is AI transforming the insurtech sector?

The use of AI also brings risks and ethical considerations for insurers and insurtech firms. “With all AI, you need to understand where the AI models are from and where the data is being trained from and, importantly, whether there is an in-built bias,” says Kevin Gaut, chief technology officer at insurtech INSTANDA. “Proper due diligence on the data is the key, even with your own internal data.” It’s essential, too, that organisations can explain any decisions that are taken, warns Muylle, and that there is at least some human oversight. “A notable issue is the black-box nature of some AI algorithms that produce results without explanation,” he warns. “To address this, it’s essential to involve humans in the decision-making loop, establish clear AI principles and involve an AI review board or third party. Companies can avoid pitfalls by being transparent with their AI use and co-operating when questioned.” AI applications themselves also raise the potential for organisations to get caught out in cyber-attacks. “Perpetrators can use generative AI to produce highly believable yet fraudulent insurance claims,” points out Brugger. 


Evaluating crisis experience in CISO hiring: What to look for and look out for

So long as a candidate’s track record is verifiable and clear in its contribution to intrusion events, direct experience of a crisis may actually be more indicative of future success than more traditional metrics. By contrast, be wary of the “onlookers,” those individuals with qualifications but whose learned experience comes from arm’s length involvement in a crisis. While such persons may contribute positively to their organization, the role of the crisis in their hiring should be de-emphasized relative to more conventional metrics of future performance. ... The emerging consensus of research is that being present for multiple stages of the response lifecycle — being impacted by an attack’s disruptions or helping with preparedness for a future response — is far better experience than simply witnessing an attack. Those who experience the initial effects of a compromise or other attack and then go on to orient, analyze, and engage in mitigation activities are the ones for whom over-generalization and perverse informational reactions appear less likely.



Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden

Daily Tech Digest - June 20, 2024

Measure Success: Key Cybersecurity Resilience Metrics

“Cyber resilience is a newer concept. It can get thrown around when one really means cybersecurity, and also in cases where no one really cares about the difference between the two,” says Mike Macado, CISO at BeyondTrust, an identity and access security company. “And to be fair, there can be some blurring between the two. ... “Once the resilience objectives are clear, KPIs can be set to measure them. While there are many abstract possible KPIs, it is crucial to set meaningful and measurable KPIs that can indicate your cyber resilience level and not only tick the box,” says Kellerman. And what are the meaningful, core KPIs? “These include mean time to detect, mean time to respond, recovery time objective, recovery point objective, percentage of critical systems with exposures, employee awareness and phishing click-rates, and an overall assessment of leadership. These KPIs will properly assess your security controls and whether they are protecting your critical path assets, helping to ensure they’re capable of preventing threats.” Kellerman adds. ... “The ability to recover from a cybersecurity attack within a reasonable time that guarantees business continuity is a crucial indicator of resilience...” says Joseph Nwankpa


Most cybersecurity pros took time off due to mental health issues

“Cybersecurity professionals are at the forefront of a battle they know they are going to lose at some point, it is just a matter of time. It’s a challenging industry and businesses need to recognize that without motivation, cybersecurity professionals won’t be at the top of their game. We’ve worked with both cybersecurity and business leaders to understand the challenges the industry faces. What we’ve discovered shows just how difficult the job is and that there is a significant gap of understanding between the board and the professionals,” said Haris Pylarinos, CEO at Hack The Box. “We’re calling for business leaders to work more closely with cybersecurity professionals to make mental well-being a priority and actually provide the solutions they need to succeed. It’s not just the right thing to do, it makes business sense,” concluded Pylarinos. “Stress, burnout and mental health in cybersecurity is at an all-time high. It’s also not just the junior members of the team, but right up to the CISO level too,” said Sarb Sembhi, CTO at Virtually Informed.


Forget Deepfakes: Social Listening Might be the Most Consequential Use of Generative AI in Politics

Ultimately, the most vulnerable individuals likely to be affected by these trends are not voters; they are children. AI chatbots are already being piloted in classrooms. “Children are once again serving as beta testers for a new generation of digital tech, just as they did in the early days of social media,” writes Caroline Mimbs Nyce for The Atlantic. The risks from generative AI outputs are well documented, from hallucinatory responses to search inquiries to synthetic nonconsensual sexual imagery. Given the rapid normalization of surveillance in education technology, more attention should probably be paid to the inputs such systems collect from kids. ... Not every AI problem requires a policy solution specific to AI: a federal data privacy law that applied to campaigns and political action committees would go a long way toward regulating generative AI-enabled social listening, and could have been put in place long before that technology became widely accessible. The fake Biden robocalls in New Hampshire similarly commend low-tech responses to high-tech problems: the political consultant behind them is charged not with breaking any law against AI fakery but with violating laws against voter suppression.


Resilience in leadership: Navigating challenges and inspiring success

Research shows that cultivating resilience is a long and arduous journey that requires self-awareness, emotional intelligence, and a relentless commitment to personal growth. A great example of this quality and a leader I admire greatly is Jensen Huang, President of Nvidia, which is now one of the most valuable companies in the world with a market cap of more than $2 trillion. As Huang describes quite candidly in many interviews, his early years and the hardships he endured helped him build resilience, where he learnt to brush things off and move on no matter how difficult the situation was. While addressing the grad students at Stanford Graduate School of Business, Huang revealed that “I wish upon you ample doses of pain and suffering,” as he believes great character is only formed out of people who have suffered. These experiences have not only helped Huang develop a robust management style but have also helped him approach any problem with the mindset of “How hard can it be?” While Jensen’s life exemplifies the importance of hardships and suffering, resilience isn't limited to overcoming hardships; it's also about innovation and adaptability in leadership. 


IDP vs. Self-Service Portal: A Platform Engineering Showdown

It’s easy to get lost in the sea of IT acronyms at the best of times, and the platform engineering ecosystem is the same, particularly given that these two options seem to promise similar things but deliver quite differently. For many organizations, choosing or building an IDP might be what they think is required to save their developers from repetitive work while looking for a self-service portal (SSP) to streamline automation. ... By providing a user-friendly interface to define and deploy cloud resources, an SSP frees up the time and effort required to set up complex infrastructure configurations. Centralizing resources provides oversight while also enabling guardrails to be established to protect against “shadow IT” being deployed. This not only helps identify resources that aren’t being used to save money but also helps make cloud practices more eco-friendly by removing unnecessary resources. This is the main difference between an SSP and an IDP, and understanding which capabilities an organization needs is critical for ensuring a smooth platform engineering journey. Like a Russian doll, an IDP is a layer on top of an SSP that offers tools to streamline the entire software development lifecycle.


Chinese Hackers Used Open-Source Rootkits for Espionage

Attackers exploited an unauthenticated remote command execution zero-day on VMware vCenter tracked as CVE-2023-34048. If the threat group failed to gain initial access on the VMware servers, the attackers targeted similar flaws in FortiOS, a flaw in VMware vCenter called postgresDB, or a VMware Tools flaw. After compromising the edge devices, the group's pattern has been to deploy open-source Linux rootkit Reptile to target virtual machines hosted on the appliance. It uses four rootkit components to capture secure shell credentials. These include Reptile.CMD to hide files, processes and network connections; Reptile.Shell to listen to specialized packets- a kernel level file to modify the .CMD file to achieve rootkit functionality; and a loadable kernel file for decrypting the actual module and loading it into the memory. "Reptile appeared to be the rootkit of choice by UNC3886 as it was observed being deployed immediately after gaining access to compromised endpoints," Mandiant said. "Reptile offers both the common backdoor functionality, as well as stealth functionality that enables the threat actor to evasively access and control the infected endpoints via port knocking."


What are the benefits of open access networks?

Toomey says there are various benefits to open access networks, with a key benefit being the fostering of competition. “This competition drives innovation as providers strive to offer the best services and technologies to attract and retain customers,” she said. “Additionally, open access networks can reduce costs for service providers by sharing infrastructure, leading to more affordable services for end-users. “These networks also promote greater network efficiency and resource utilisation, benefiting the entire telecom ecosystem.” But there are challenges with building an open access network, as Toomey said there are high costs in building and maintaining the necessary infrastructure. Enet invested €50m in 2022 to expand its fibre network, but saw its profits fall 47pc to €3.7m in the same year. “Additionally, there is a risk of overbuild, where multiple networks are constructed in the same area, leading to inefficient resource use,” Toomey said. “Another challenge is the centralised thinking on network roll-out in cities, which can neglect rural and underserved areas, creating a digital divide. “Addressing these challenges requires strategic planning and investment, as well as collaboration with government and industry stakeholders to ensure balanced network development.”


CIOs take note: Platform engineering teams are the future core of IT orgs

The core roles in a platform engineering team range from infrastructure engineers, software developers, and DevOps tool engineers, to database administrators, quality assurance, API and security engineers, and product architects. In some cases teams may also include site reliability engineers, scrum masters, UI/UX designers, and analysts who assess performance data to identify bottlenecks. And according to Joe Atkinson, chief products and technology officer at PwC, these teams offer a long list of benefits to IT organizations, including building and maintaining scalable, flexible infrastructure and tools that enable efficient operations; developing standardized frameworks, libraries, and tools to enable rapid software development; cutting costs by consolidating infrastructure resources; and ensuring security and compliance at the infrastructure level. ... You can’t have a successful platform engineering team without building the right culture, says Jamie Holcombe, USPTO CIO. “If you don’t inspire the right behavior then you’ll get people who point at each other when something goes wrong.” And don’t withhold information, he adds. 


What is the current state of Security Culture in Europe?

Organizations prioritizing the establishment and upkeep of a security culture will encourage notably heightened security awareness behaviors among their employees. Examining this further, research has shown that organizations in Europe have a good understanding of security culture as both a process and a strategic measure. However, many have yet to take their first tactical steps toward achieving that goal. Those who have done so realize that shaping security behaviors is essential in developing a security culture. ... Delving deeper, smaller European organisations score higher in security culture due to more effective personal communication, stronger community bonds and better support for security issues. This naturally leads to enhanced Cognition and Compliance, with improvements in communication channels posited as a key driver for better security policy understanding and proactive security behaviours that outperform global averages. Conducting an examination of which industries displayed the best security culture within Europe, it is certainly gaining traction among security experts within sectors like finance, banking and IT, which are all heavily digitized.


Data Integrity: What It Is and Why It Matters

While data integrity focuses on the overall reliability of data in an organization, Data Quality considers both the integrity of the data and how reliable and applicable it is for its intended use. Preserving the integrity of data emphasizes keeping it intact, fully functional, and free of corruption for as long as it is needed. This is done primarily by managing how the data is entered, transmitted, and stored. By contrast, Data Quality builds on methods for confirming the integrity of the data and also considers the data’s uniqueness, timeliness, accuracy, and consistency. Data is considered “high quality” when it ranks high in all these areas based on the assessment of data analysts. High-quality data is considered trustworthy and reliable for its intended applications based on the organization’s data validation rules. The benefits of data integrity and Data Quality are distinct, despite some overlap. Data integrity allows a business to recover quickly and completely in the event of a system failure, prevent unauthorized access to or modification of the data, and support the company’s compliance efforts. 



Quote for the day:

“Failures are finger posts on the road to achievement.” -- C.S. Lewis

Daily Tech Digest - June 19, 2024

Executive Q&A: Data Quality, Trust, and AI

Data observability is the process of interrogating data as it flows through a marketing stack -- including data used to drive an AI process. Data observability provides crucial visibility that helps users both interrogate data quality and understand the level of data quality prior to building an audience or executing a campaign. Data observability is traditionally done through visual tools such as charts, graphs, and Venn diagrams, but is itself becoming AI-driven, with some marketers using natural language processing and LLMs to directly interrogate the data used to fuel AI processes. ... In a way, data silos are as much a source of great distress to AI as they are to the customer experience itself. A marketer might, for example, use a LLM to help generate amazing email subject lines, but if AI generates those subject lines knowing only what is happening in that one channel, it is limited by not having a 360-degree view of the customer. Each system might have its own concept of a customer’s identity by virtue of collecting, storing, and using different customer signals. When siloed data is updated on different cycles, marketers lose the ability to engage with a customer in the precise cadence of the customer because the silos are out of synch with a customer journey.


Only 10% of Organizations are Doing Full Observability. Can Generative AI Move the Needle?

The potential applications of Generative AI in observability are vast. Engineers could start their week by querying their AI assistant about the weekend’s system performance, receiving a concise report that highlights only the most pertinent information. This assistant could provide real-time updates on system latency or deliver insights into user engagement for a gaming company, segmented by geography and time. Imagine being able to enjoy your weekend and arrive at work with a calm and optimistic outlook on Monday morning, and essentially saying to your AI assistant: “Good morning! How did things go this weekend?” or “What’s my latency doing right now, as opposed to before the version release?” or “Can you tell me if there have been any changes in my audience, region by region, for the past 24 hours?” These interactions exemplify how Generative AI can facilitate a more conversational and intuitive approach to managing development infrastructure. It’s about shifting from sifting through data to engaging in meaningful dialogue with data, where follow-up questions and deeper insights are just a query away.


The Ultimate Roadmap to Modernizing Legacy Applications

IT leaders say they plan to spend 42 percent more on average on application modernization because it is seen as a solution to technical debt and a way for businesses to reach their digital transformation goals, according to the 2023 Gartner CIO Agenda. But even with that budget allocated, businesses still face significant challenges, such as cost constraints, a shortage of staff with appropriate technical expertise, and insufficient change management policies to unite people, processes and culture around new software. To successfully navigate the path forward, IT leaders need a strategic roadmap for application modernization. The plan should include prioritizing which apps to upgrade, aligning the effort with business objectives, getting stakeholder buy-in, mapping dependencies, creating data migration checklists and working with trusted partners to get the job done. ... “Even a minor change to the functionality of a core system can have major downstream effects, and failing to account for any dependencies on legacy apps slated for modernization can lead to system outages and business interruptions,” Hitachi Solutions notes in a post.


Is it time to split the CISO role?

In one possible arrangement, a CISO reports to the CEO and a chief security technology officer (CSTO), or technology-oriented security person, reports to the CIO. At a functional level, putting the CSTO within IT gives the CIO a chance to do more integration and collaboration and unites observability and security monitoring. At the executive level, there’s a need to understand security vulnerabilities and the CISO could assist with strategic business risk considerations, according to Oltsik. “This kind of split could bring better security oversight and more established security cultures in large organizations.” ... To successfully change focus, CISOs would need to get a handle on things like the financials and company strategy and articulate cyber controls in this framework, instead of showing up every quarter with reports and warnings. “CISOs will need to incorporate their risk taxonomy into the overall enterprise risk taxonomy,” Joshi says. In this arrangement, however, the budget could arise as a point of contention. CIO budgets tend to be very cyber heavy these days, Joshi explains, and it could be difficult to create the situation where both the CISO and CIO are peers without impacting this allocation of funds.


Empowering IIoT Transformation through Leadership Support

To gain project acceptance and ultimately ensure project success will rely heavily on identifying all key stakeholders, nurturing an on-going level of mutual trust and maintaining a strong focus on targeted end results. This involves a full disclosure of desired outcomes and a willingness to adapt to individual departmental nuances. Begin with a cross-department kickoff/planning meeting to identify interested parties, open projects, and available resources. Invite participation through a discovery meeting, focusing on establishing the core team, primary department, cross-department dependencies, and consolidating open projects or shareable resources. ... Identifying all digital data blind spots at the outset highlights the scale of the problem. While many companies have Artificial Intelligence (AI) and Business Intelligence (BI) initiatives, their success depends on the quality of the source data. Consolidating these initiatives to address digital data blind spots strengthens the data-driven business case. Once a critical mass of baselines is established, projecting Return On Investment (ROI) from both a quantification and qualification perspective becomes possible. 


Will more AI mean more cyberattacks?

Organisations are also potentially exposing themselves to cyber threats through their own use of AI. According to research by law firm Hogan Lovells, 56 per cent of compliance leaders and C-suite executives believe misuse of generative AI within their organisation is a top technology-associated risk that could impact their organisation over the next few years. Despite this, over three-quarters (78 per cent) of leaders say their organisation allows employees to use generative AI in their daily work. One of the biggest threats here is so-called ‘shadow AI’, where criminals or other actors make use of, or manipulate, AI-based programmes to cause harm. “One of the key risks lies in the potential for adversaries to manipulate the underlying code and data used to develop these AI systems, leading to the production of incorrect, biased or even offensive outcomes,” says Isa Goksu, UK and Ireland chief technology officer at Globant. “A prime example of this is the danger of prompt injection attacks. Adversaries can carefully craft input prompts designed to bypass the model’s intended functionality and trigger the generation of harmful or undesirable content.” Jow believes organisations need to wake up to the risk of such activities.


What It Takes to Meet Modern Digital Infrastructure Demands and Prepare for Any IT Disaster

As you evaluate the evolving needs of your organization’s own infrastructure demands, consider whether your network is equipped to handle a growing volume of data-intensive applications — and if your team is ready to act in the face of unexpected service interruption. The push to adopt advanced technologies like AI and automation are the main drivers of network optimization for most organizations. But the growing prevalence of volatile, uncertain, complex, and ambiguous (VUCA) situations is another reason to review your communications infrastructure’s readiness to withstand future challenges. VUCA is a catch-all term for a wide range of unpredictable and challenging situations that can impact an organization’s operations, from natural disasters to political conflict, economic instability, or cyber-attacks. ... Maintaining operational continuity and resilience in the face of VUCA events requires a combination of strategic planning, operational flexibility, technological innovation, and risk-management practices. This includes investing in technology that improves agility and resilience as well as in people who are prepared for adaptive decision-making when VUCA situations arise.


APIs Are the Building Blocks of Bank Innovation. But They Have a Risky Dark Side

A key point is that it’s not just institutions suffering. Frequently APIs used by banks draw on PII (personally identifiable information) such as social security numbers, driver’s license data, medical information and personal financial data. APIs may also handle device and location data. “While this data may not seem as sensitive as PII or payment card details at first glance, it can still be exploited by malicious actors to gain insights into a user’s behavior, preferences and movements,” the report says. “In the wrong hands, this information could be used for targeted phishing attacks, social engineering, or even physical threats.” “Everything in the financial transaction world today is going across the internet, via APIs,” says Bird. ... Bird points out that the bad guys have more than just tools from the dark web to help them do their business. Frequently they tap the same mainstream tools that bankers would use. He laughs when he recalls demonstrating to a reporter how a particular fraud would have been assisted using Excel pivot tables. The journalist didn’t think of criminals using legitimate software. “Why wouldn’t they?” said Bird.


Enterprise AI Requires a Lean, Mean Data Machine

Today’s LLMs need volume, velocity, and variety of data at a rate not seen before, and that creates complexity. It’s not possible to store the kind of data LLMs require on cache memory. High IOPS and high throughput storage systems that can scale for massive datasets are a required substratum for LLMs where millions of nodes are needed. With superpower GPUs capable of lightning-fast read storage read times, an enterprise must have a low-latency, massively parallel system that avoids bottlenecks and is designed for this kind of rigor. ... It’s crucial that these technological underpinnings of the AI era be built with cost efficiency and reduction of carbon footprint in mind. We know that training LLMs and the expansion of generative AI across industries are ramping up our carbon footprint at a time when the world desperately needs to reduce it. We know too that CIOs consistently name cost-cutting as a top priority. Pursuing a hybrid approach to data infrastructure helps ensure that enterprises have the flexibility to choose what works best for their particular requirements and what is most cost-effective to meet those needs.


Building Resilient Security Systems: Composable Security

The concept of composable security represents a shift in the approach to cybersecurity. It involves the integration of cybersecurity controls into architectural patterns, which are then implemented at a modular level. Instead of using multiple standalone security tools or technologies, composable security focuses on integrating these components to work in harmony. ... The concept of resilience in composable security is reflected in a system's ability to withstand and adapt to disruptions, maintain stability, and persevere over time. In the context of microservices architecture, individual services operate autonomously and communicate through APIs. This design ensures that if one service is compromised, it does not impact other services or the entire security system. By separating security systems, the impact of a failure in one system unit is contained, preventing it from affecting the entire system. Furthermore, composable systems can automatically scale according to workload, effectively managing increased traffic and addressing new security requirements.



Quote for the day:

"The task of leadership is not to put greatness into humanity, but to elicit it, for the greatness is already there." -- John Buchan

Daily Tech Digest - June 18, 2024

The Intersection of AI and Wi-Fi 7

Wi-Fi 7 is the newest standard in wireless networking. Though official ratification isn't expected until the end of 2024, Wi-Fi 7 client devices and wireless access points are already available. The top line speed of Wi-Fi 7 is often stated at 46 Gbps, but actual speeds will be lower. The higher speeds of Wi-Fi 7 are delivered by using a 320 MHz wide channel, increasing the transmission rate to 4K QAM and increasing the number of transmit and receive chains to 16. Another key advantage of Wi-Fi 7 is a significant reduction in packet latency, thanks to a feature called Multi-Link Operation (MLO). ... AI Autonomous Networks consolidate key performance indicators to aid decision-making. During the shift from 2.4 GHz and 5 GHz to 6 GHz networking, IT managers can use AI to expose timing and predict improvements, facilitating timely network upgrades. Another example is digital twin architecture, which simulates the network environment using real-world client analytics to model behavior, evaluate security changes, and assess configuration adjustments. The goal is to provide IT managers with tools for timely and accurate decisions.


Linux in your car: Red Hat’s milestone collaboration with exida

Red Hat’s collaboration with exida marks a significant milestone. While it may not be obvious to all of us, Linux is playing an increasingly important role in the automotive industry. In fact, even the car you’re driving today could be using Linux in some capacity. Linux is very well known and appreciated in the automotive industry with increasing attention being paid both to its reliability and its security. The phrase “open source for the open road” is now being used to describe the inevitable fit between the character of Linux and the need for highly customizable code in all sorts of automotive equipment. The safety of vehicles that get us from one place to another on a nearly daily basis has become a serious priority. ... Their focus on ensuring the safety of both individual components and the operating system as a whole is crucial. This latest achievement brings them even closer to realizing the first continuously-certified in-vehicle Linux Red Hat In-Vehicle Operating System. Their open source first approach to the organization, culture and thought process is an exemplary superset of what exida regards as a best practice for world-class safety culture. 


How CIOs Can Integrate AI Among Employees Without Impacting DEI

As technology adoption accelerates, employees risk falling behind in adapting to meet enterprise demands. This trend has been evident across computing eras, from PCs to the current AI and Internet of Things era. Each phase widens the gap between technology introduction and employees’ ability to use it effectively. ... To prioritize DEI in addressing employee upskilling to leverage AI, CIOs can embrace a spectrum of initiatives, from establishing peer mentorship programs to providing access to online courses, workshops, and conferences. The aim is to promote educational opportunities for those most at risk of falling behind, which will increase the cost risk in the future due to the extra cost of retraining staff or seeking new talent. To successfully link digital dexterity to DEI to prepare employees, CIOs should implement a training program that equitably exposes all workforce segments to AI and the machine economy to develop soft and technical skills. Shift the focus of AI adoption away from solely business needs and focus on individual empowerment


What is a CAIO — and what should they know?

CAIOs and others tasked with overseeing AI deployments play an essential role in “shaping an organization’s strategic, informed and responsible use of AI,” he said. “There are many responsibilities baked into the role, but at its core, it’s about steering the direction of AI initiatives and innovation to align with company goals. AI leads must also create a culture of collaboration and continuous learning.” ... While CAIOs might not always be seated at the C-suite table, those who are there are keenly focused on genAI and its potential to drive efficiencies and profits. Without an executive guiding those deployments, achieving the performance and ROI organizations seek will be tough, she said. “It’s hard to imagine how pieces come together and how you’d bring together so many players,” Kosar said, noting that PwC has more than a dozen different LLMs running internally to power AI tools and products in virtually every business unit. “You have to have the ability to do short-term and long-term planning and balance the two and stay focused on innovation,” she continued. “At the same time, you need to recognize the pace of change while not getting distracted by the latest shiny object.”


How AI is impacting data governance

Every organization needs to establish policies around the handling of its data—informed by federal, state, industry, and international regulations as well as internal business rules. In larger enterprises, a data governance committee sets those policies and specifies how they should be followed in a living document that evolves as regulations and procedures change. The natural language capabilities of generative AI can pop out first drafts of that documentation and make subsequent changes much less onerous. By analyzing data usage patterns, regulatory requirements, and internal workflows, AI can help organizations define and enforce data retention policies and automatically identify data that has reached the end of its useful life. ... AI-powered disaster recovery systems can help organizations develop sound recovery strategies by predicting potential failure scenarios and establishing preventive measures to minimize downtime and data loss. Backup systems infused with AI can ensure the integrity of backups and, when disaster strikes, automatically initiate recovery procedures to restore lost or corrupted data.


The impact of compliance technology on small FinTech firms

However, smaller firms often struggle to adapt quickly due to resource constraints, leading to a more reactive compliance management approach. For smaller firms, running on thin resources could mean higher risks. Many operate with minimal compliance staff or assign compliance duties to employees who juggle multiple roles. This can stretch employees too thin, making it tough to keep up with regulatory changes or manage conflicts of interest that might jeopardize the firm. The use of basic tools like spreadsheets and emails increases the risk of missing important updates or failing to adequately address identified risks due to the lack of clear ownership and effective action plans. Furthermore, regulatory penalties can disproportionately impact smaller firms that lack the financial buffer to absorb significant fines. The ever-evolving regulatory landscape poses an ongoing risk to compliance. Smaller firms must navigate a vast array of compliance policies and procedures. Even those with dedicated compliance or legal experts face the challenge of sifting through extensive documentation to identify relevant changes. 


Revolutionising firms’ security with SASE

For Indian companies, today is an opportune time to have a well-thought long-term SASE strategy and identify short-term consolidation tactics to achieve your desired SASE model. There may be a change required in the firm’s IT culture to adopt integrated networking and security teams, which involves a shift from silo ways of working to shared control. Because no two SASE journeys are the same, therefore, it is up to enterprises to prepare differently and plan for different or customized outcomes. And the first step to doing so is selecting a trusted partner to help in the assessment of your network and security roadmaps against SASE as the reference architecture. Just as significant as the delivery and operational components of SASE, is having a partner who understands innovation and agility, with an eye towards the future. The partner should be able to assist in technology evaluation, establish proof of value, and recommend adaptations to integrate SASE components – all of which go toward laying the foundation for the firm’s security and network roadmaps. Firms should know that when it comes to executing SASE, it isn’t just done and dusted but a multi-disciplinary project with moving parts.


The Next Phase of the Fintech Revolution: Inside the Disruption and the Challenges Facing Banking

The thing that’s causing the most waves right now, frankly, is the regulators. We had evolved to this architecture where you had fintechs doing their thing. You had sponsor banks of various types underneath who were actually bearing the regulatory burden and holding the cash — things that only banks can really do. And then you had these middleware companies that are generically kind of known as banking as a service companies (BaaS). That architecture, which underpins much of the payments, lending and banking innovation that we’ve seen, has now been called into question by regulators and is being litigated ... The most important theme right now is the implications of generative AI for financial services and, not least of all, retail banking. What’s being funded right now are basically vendors. So, this new crop of technology companies is springing up to serve banks and financial institutions more generally and help them with digital transformation as it relates to generative AI. So, you could think of chatbot companies as being probably the most advanced wedge on this and customer service generally as a way to introduce generative AI, lower OpEx and create more customer delight.


Data Governance and AI Governance: Where Do They Intersect?

AI governance needs to cover the contents of the data fed to and retrieved through AI, in addition to considering the level of AI intelligence. Doing so addresses issues like biases, privacy, use of intellectual property, and misuse of the technology. Consequently, AIG needs to guide what subject matter can be processed through AI, when, and in what contexts. ... AIG and DG share common responsibilities in guiding data as a product that AI systems create and consume, despite their differences. Both governance programs evaluate data integration, quality, security, privacy, and accessibility. ... The data governance team audits the product data pipeline and finds inconsistent data standards and missing attributes feeding into the AI model. However, the AI governance team also identifies opportunities to enhance the recommendation algorithm’s logic for weighting customer preferences. The retailer could resolve the data quality issues through DG while AIG improved the AI model’s mechanics by taking a collaborative approach with both data governance and AI governance perspectives. 


Enhancing security through collaboration with the open-source community

Without funding, it is difficult for open-source projects to get official certifications. So, companies in regulated sectors that need those certifications often can’t use open-source solutions. For the rest, open-source really has “eaten the world.” Most modern tech companies wouldn’t exist without open-source tools, or would have drastically different offerings. ... Too many just download the open-source project and run away. One way for corporate entities to get involved is by contributing bug fixes and small features. This can be done through anonymous email accounts if it’s necessary to keep the company’s involvement private. Companies should also use the results of their security analysis to help improve the original project. There is some self-interest involved here. Why should a company use its resources to maintain proprietary patches for an open-source project when it can instead send those patches back and have the community maintain them for free? Google has been doing a good job of this with their OSS-FUZZ project. It has found many bugs and helped a large number of the open-source projects using it.



Quote for the day:

"Develop success from failures. Discouragement and failure are two of the surest stepping stones to success." -- Dale Carnegie