Daily Tech Digest - October 13, 2023

Meet the New DevSecOps

AI-powered DevSecOps for the enterprise merges AI capabilities that until now evolved separately. AI-powered software delivery workflows automate large-scale programs such as app modernization and cloud migration. AI-based code governance helps developers use code-assist and other generative AI tools to speed up the writing, checking and optimizing of traditional code. Predictive intelligence applies machine learning algorithms to data across the entire software development and delivery life cycle (SDLC) so managers gain earlier software delivery insights to forecast capacity, foresee risks and respond to changes. The reality is that when AI solutions are implemented — often in piecemeal fashion among smaller teams — they add to the clutter of siloed and fragmented tools, methods and processes that will eventually bite back on the short-lived perception of “progress.” A truly systemic approach to DevSecOps that merges and leverages AI capabilities at scale is one of the most important adjustments an enterprise can make for a lasting advantage in today’s AI-augmented world.


Navigating Data Management At a Strategic & Tactical Level

In Mexico, the biggest challenge is determining which part of the data management strategy to focus on first. Some organizations' data integration strategies amount to purchasing technology in the hope that it will solve all their problems. However, it is essential to first organize the data before attempting to solve everything in a single move or through a single technology. Before developing a Master Data Management (MDM) strategy, companies must first sit down and identify what they want from it. The data strategy should come from the top of the organization because it will require a cultural shift to implement it correctly. ... Having technological expertise alone is not enough to help the client capitalize on the collected data. Alldatum focuses on the human side of technology, which is sometimes overlooked. We want to help clients extract value from information because if they succeed in their data strategies, their company will grow and they will make better decisions. If they make better decisions, there will be more job opportunities, which will positively impact the country's economy.


The impact of artificial intelligence on software development? Still unclear

It's common for experts to suggest these days that AI will deliver significant boosts to software development and deployment productivity, along with developer job satisfaction. "So far our survey evidence doesn't support this," the report's authors, Derek DeBellis and Nathen Harvey, both with Google, state. "Our evidence suggests that AI slightly improves individual well-being measures -- such as burnout and job satisfaction -- but has a neutral or perhaps negative effect on group-level outcomes such as team performance and software delivery performance." These flat findings are likely due to the fact that we're still at the early stages of AI adoption, they surmise: "There is a lot of enthusiasm about the potential of AI development tools, as demonstrated by the majority of people incorporating at least some AI into the tasks we asked about. But we anticipate that it will take some time for AI-powered tools to come into widespread and coordinated use in the industry."


Quantum risk is real now: How to navigate the evolving data harvesting threat

The real emphasis of HNDL threats is on high-value, long-term data assets like trade secrets or intellectual property, which are passively harvested from large-scale data access points rather than personal WiFi hotspots. In essence, if a device is likely to possess important actionable information of near-term value, it’s more likely to be attacked immediately rather than be subjected to a longer-term HNDL strategy. Given the sensitive nature of the data at stake — from personal information to state secrets — the HNDL risk poses a severe threat. ... Understanding quantum security is essential in mitigating the risk of HNDL attacks. Once asymmetric encryption, which is currently not quantum-safe, is broken, session keys and symmetric keys will be exposed. Therefore, mitigation involves either using quantum-secure encryption or eliminating the transmission of encryption keys altogether. It’s essential to clear up a common misconception: while Advanced Encryption Standard (AES) is often touted as quantum-safe, the security of AES in practice hinges on the RSA mechanism — a type of asymmetric encryption — used to distribute its keys, and RSA is not quantum-safe.


What is a data architect? Skills, salaries, and how to become a data framework master

Data architects are senior visionaries who translate business requirements into technology requirements and define data standards and principles, often in support of data or digital transformations. The data architect is responsible for visualizing and designing an organization’s enterprise data management framework. This framework describes the processes used to plan, specify, enable, create, acquire, maintain, use, archive, retrieve, control, and purge data. The data architect also “provides a standard common business vocabulary, expresses strategic requirements, outlines high-level integrated designs to meet those requirements, and aligns with enterprise strategy and related business architecture,” according to DAMA International’s Data Management Body of Knowledge. ... The data architect and data engineer roles are closely related. In some ways, the data architect is an advanced data engineer. Data architects and data engineers work together to visualize and build the enterprise data management framework. The data architect is responsible for visualizing the “blueprint” of the complete framework that data engineers then build.


How Agile Teams Can Improve Predictability by Measuring Stability

The stability metric, Ψ, is a really simple calculation that can be done on the back of an envelope. It has two inputs: the arrival rate, λ, and the service rate, μ. The arrival rate is the number of PBIs added to a system in a period of time. The service rate, μ, is the number of PBIs successfully done by the team in the same period of time. With these two inputs you can calculate the dimensionless Ψ by just dividing the service rate by the arrival rate. When Ψ is less than one, the system is unstable; when it is greater than one, it is stable, and when it is equal to one it is optimally stable. When Ψ is equal to one, the average arrival rate is equal to the average service rate and Little’s law applies. In this state, the backlog is neither growing nor shrinking over time and the average time an item will spend before it is done can be calculated by dividing the total number of items in the system, L, by the arrival rate, λ. When Ψ is less than one, then items are arriving faster than they can be dealt with and the backlog is growing.
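The back-of-the-envelope calculation described above can be sketched in a few lines of Python. The numbers are hypothetical, chosen only to illustrate the formulas from the article (Ψ = μ/λ, and Little's law W = L/λ):

```python
# Stability metric sketch based on the article's definitions (hypothetical numbers).
def stability(arrival_rate: float, service_rate: float) -> float:
    """Psi = service rate (mu) divided by arrival rate (lambda); dimensionless."""
    return service_rate / arrival_rate

def average_time_in_system(items_in_system: float, arrival_rate: float) -> float:
    """Little's law at Psi == 1: average time W = L / lambda."""
    return items_in_system / arrival_rate

# Example: 10 PBIs arrive per sprint, the team completes 8 in the same period.
psi = stability(arrival_rate=10, service_rate=8)
print(f"Psi = {psi:.2f}")  # Psi = 0.80 -> unstable: the backlog is growing

# If 40 items sit in the system and 10 arrive per period (with Psi == 1),
# an item spends on average 4 periods in the system before it is done.
print(average_time_in_system(items_in_system=40, arrival_rate=10))  # 4.0
```

Tracking Ψ per sprint makes the trend visible early: a sustained Ψ below one signals that intake should slow down or capacity should grow.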


DarkGate Operator Uses Skype, Teams Messages to Distribute Malware

Trend Micro's analysis showed that once DarkGate is installed on a system, it drops additional payloads. Sometimes those are variants of DarkGate itself or of Remcos, a remote access Trojan (RAT) that attackers have previously used for cyber-espionage surveillance and for stealing tax-related information. Trend Micro said it was able to contain the DarkGate attacks it observed before any actual harm came to pass. But given the developer's apparent pivot to a new malware leasing model, enterprise security teams can expect more attacks from varied threat actors. The objectives of these adversaries could vary, meaning organizations need to keep an eye out for threat actors using DarkGate to infect systems with different kinds of malware. While the attacks that Trend Micro observed targeted individual Skype and Teams recipients, the attacker's goal clearly was to use their systems as an initial foothold on the target organization's networks. "The goal is still to penetrate the whole environment, and depending on the threat group that bought or leased the DarkGate variant used, the threats can vary from ransomware to cryptomining," according to Trend Micro.


Managing a Freelance Data Science Team

Freelance data scientists are a unique breed. They combine the technical skills of a data scientist with the entrepreneurial mindset of a freelancer. They are highly self-motivated, disciplined, and adaptable, able to navigate the uncertainties of freelance work while maintaining a high level of professional expertise. In terms of technical skills, freelance data scientists typically have advanced degrees in fields like statistics, computer science, or data science, and have a deep understanding of machine learning algorithms, statistical modeling, data visualization, and programming languages like Python and R. They are also adept at using data science tools and platforms like Hadoop, Spark, and Tableau. However, what sets freelance data scientists apart is their ability to operate independently. They are comfortable with remote work, adept at managing their time, and capable of maintaining strong relationships with clients. They are also often more up-to-date with the latest industry trends and technologies, as they need to constantly upskill to remain competitive.


Researchers: The Future of AI Is Wide, Deep, and Large

In their research, the team considered the relationship between deep and wide neural networks. Using quantitative analysis, they found that deep and wide networks can be converted back and forth on a continuum. Using both will give a bigger picture and avoid bias. Their research hints at the future of machine learning, in which networks are both deep and wide and interconnected with favorable dynamics and optimized ratios between width and depth. Networks will become increasingly complicated, and when dynamics reach the desired states, they will produce amazing outcomes. “It’s like playing with LEGO bricks,” said Wang. “You can build a very tall skyscraper or you can build a flat large building with many rooms on the same level. With networks, the number of neurons and their interconnection are the most important. In 3D space, neurons can be arranged in myriad ways. It’s just like the structure of our brains. The neurons just need to be interconnected in various ways to facilitate diverse tasks.”


Why we need to focus AI on relationships

Granted, you also need to onboard the AI to make sure employees trust the tool and will use it. This can be problematic, given that many employees fear that AI will replace them. A Harvard Business Review article provided guidance on how to do this properly, suggesting that the AI be set up to succeed following known successful procedures first and then advancing the interaction as the employee becomes comfortable with the tool. In other words, you start by making the AI into an assistant that enhances and helps the employee, then allow it to become a monitor/mentor as it provides real-time feedback on how the employee is interacting with others in the company. The next phase is the coach, where the AI becomes proactive and able to provide more detailed feedback at times of the employee’s choosing, with the final phase being that the AI becomes a teammate, able to autonomously do things on behalf of the human/AI team. During this evolution of the personal AI, the employee trains the AI and the AI trains the employee, so they become two parts of a more productive team.



Quote for the day:

"Leadership is absolutely about inspiring action, but it is also about guarding against mis-action." -- Simon Sinek

Daily Tech Digest - October 12, 2023

Bridging the AI-Human Divide: AI as Your Operations Teammate

When viewed through the lens of collaboration rather than competition, AI should free people to do what they are uniquely good at — creative problem-solving, collaborating and using judgment to solve complex challenges. This “human-in-the-loop” approach to AI can alleviate people from daily-grind tasks that are time-consuming, repetitive, and often lead to burnout. Envision a scenario where your AI teammate seamlessly integrates into your operational workflows, becoming a trusted assistant who handles time-consuming tasks. This teammate has a comprehensive understanding of your operations, understands the contextual importance of data and can deliver that data exactly when needed in the forms of metrics, graphs and recommended actions. Imagine an AI teammate that collects and sorts a massive amount of data, producing concise summaries for human analysis. Or AI that provides you with information that detects and contextualizes incidents. As potent as AI is in dissecting vast data sets, spotting patterns, and rendering contextual analysis, it’s still in its infancy. 


Chatbots in the future of customer service

Customers often feel like they must jump over hurdles, trying different phrase combinations to explain what they need to get a rules-based chatbot — which is typically defined with existing mapped-out responses — to understand their request. Generative AI addresses this pain point, by continuously training and optimising chatbots to deliver a far more sophisticated, personalised level of customer support. Through conversational interactions, the technology can provide intelligent support for faster issue resolution while increasing self-solve rates to better streamline processes for the human agents. Generative AI’s ability to formulate responses based on historical insights, such as past behaviour and user profile problems, is a key differentiator for business leaders looking to win customer loyalty. Considering the majority of customers say it’s important organisations understand them, with 65 per cent wanting agents to resolve problems easily, it’s an impossible demand to meet without integrating intelligent solutions. 


Treat generative AI like a burning platform and secure it now

“As generative AI proliferates over the next six to 12 months, experts expect new intrusion attacks to exploit scale, speed, sophistication, and precision, with constant new threats on the horizon,” wrote Chris McCurdy, worldwide vice president & general manager with IBM Security in a blog about the study. For network and security teams, challenges could include having to battle the large volumes of spam and phishing emails generative AI can create; watching for denial-of-service attacks by those large traffic volumes; and having to look for new malware that is more difficult to detect and remove than traditional malware. “When considering both likelihood and potential impact, autonomous attacks launched in mass volume stand out as the greatest risk. However, executives expect hackers faking or impersonating trusted users to have the greatest impact on the business, followed closely by the creation of malicious code,” McCurdy stated. There’s a disconnect between organizations’ understanding of generative AI cybersecurity needs and their implementation of cybersecurity measures, IBM found.


Technical debt has hindered UK innovation

With technical debt reaching an extreme tipping point, it is important to learn from how we got here. As businesses are driven to provide better and more customized solutions for their customers, they are competing for a limited pool of developers who can help them navigate complex IT infrastructure and operations. The research found that the two leading factors behind technical debt were the high number of development languages, which makes it difficult to maintain and upgrade systems, as well as steep turnover in development teams. This results in new hires becoming responsible for platforms they did not create and may not fully understand. Other significant factors include companies accepting known defects in order to meet deadlines, and the presence of outdated development languages and frameworks. As a result, businesses struggle to maintain and rework critical systems. Over time, technical debt builds up and compounds through thousands of seemingly small decisions, before becoming a major problem preventing companies from investing in new innovations or services.


How ChatGPT and other AI tools could disrupt scientific publishing

Many editors are concerned that generative AI could be used to more easily produce fake but convincing articles. Companies that create and sell manuscripts or authorship positions to researchers who want to boost their publishing output, known as paper mills, could stand to profit. A spokesperson for Science told Nature that LLMs such as ChatGPT could exacerbate the paper-mill problem. One response to these concerns might be for some journals to bolster their approaches to verify that authors are genuine and have done the research they are submitting. “It’s going to be important for journals to understand whether or not somebody actually did the thing they are claiming,” says Wachter. At the publisher EMBO Press in Heidelberg, Germany, authors must use only verifiable institutional e-mail addresses for submissions, and editorial staff meet with authors and referees in video calls, says Bernd Pulverer, head of scientific publications there. But he adds that research institutions and funders also need to monitor the output of their staff and grant recipients more closely.


Israel-Hamas conflict extends to cyberspace

There have also been cyberattacks targeting Palestine by an India-based hacktivist group called Indian Cyber Force. The group has shown solidarity with Israel in the current conflict and has taken responsibility for bringing down the websites of Hamas, Palestine National Bank, Palestine Web Mail Government Services, and Palestine Telecommunication Company. "Indian Cyber Force has previously initiated several cyber campaigns in support of India. Their previous targets include Bangladesh and Canada. It appears that Bangladesh was targeted regarding their relationship with Pakistan," Flashpoint said in a report. ... India's open support for Israel in the ongoing conflict has also dragged the country into this cyber warfare. Several hacktivist groups objected to India's support for Israel in the current conflict and its departure from a traditional neutral stance in the Israel-Palestine conflict. Hacktivist group Ghosts of Palestine posted a message on the Telegram channel claiming to target India. The message clearly highlighted that the cause of the attacks was India's support for Israel.


How To Improve Your Data Pipeline Security Credit Score

What do we mean when discussing security tech debt in your data pipeline? This refers to building a data pipeline that doesn't have the scalability to govern data access or secure data confidently across all of your target databases. This can take a couple of forms:

- Data teams or line of business leaders are so incentivized to get the data into the cloud for data insights that they build a pipeline to move all of the data without regard to which data is sensitive and should be governed according to privacy regulations. This lets companies use the data, but it's the riskiest.
- Security teams get wind of this and slam the brakes on the data teams' plans to move data. Maybe data teams can move the easy or safe data, so they go forward without sensitive data to keep the project moving. However, suppose they haven't identified all of the places where sensitive data exists across their data sources. In that case, the security team might decide it's safer to migrate nothing at all, locking everything down under their existing security controls.
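A middle path between "move everything" and "move nothing" is to classify data before migration. The sketch below is purely illustrative — the column names and the sensitive-data set are assumptions, not any vendor's schema — but it shows the shape of the idea: partition each table into data that can move now and data that needs governance first:

```python
# Hypothetical sketch: classify columns before cloud migration so sensitive
# data is governed rather than either moved wholesale or blocked entirely.
# The SENSITIVE set and the column names are illustrative assumptions.
SENSITIVE = {"ssn", "email", "dob", "salary"}

def split_for_migration(columns: list[str]) -> tuple[list[str], list[str]]:
    """Partition a table's columns into (safe_to_move, needs_governance)."""
    safe = [c for c in columns if c.lower() not in SENSITIVE]
    governed = [c for c in columns if c.lower() in SENSITIVE]
    return safe, governed

safe, governed = split_for_migration(["id", "email", "country", "salary"])
print(safe)      # ['id', 'country'] -> migrate now
print(governed)  # ['email', 'salary'] -> mask/tokenize and apply access policy first
```

In practice the classification would come from a data catalog or scanning tool rather than a hard-coded set, but the partition itself is what lets data teams keep moving while security teams keep control.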


IT Leaders in Banking & Finance Prepare for Business in the Metaverse

Several banks are setting up lounges or virtual branches as an entry point to the metaverse and using the space to establish a presence and nurture customer relationships. Offering education, support, and advice on financial products in the metaverse can enable financial services brands to engage Gen Z even as VR banking matures. HSBC, for example, purchased virtual real estate in The Sandbox to engage and connect with sports, e-sports, and gaming enthusiasts. Is this the right idea? IT leaders attending The Future of Banking event had mixed feelings regarding virtual banking services. They expressed skepticism about the likelihood of adoption without a specific incarnation of virtual offerings that fires the customer’s imagination. Banks will need to give customers compelling reasons to go to the metaverse to complete actions they can already do with mobile banking applications or develop actions they cannot experience with mobile or web interfaces. The next biggest hurdle will be understanding what that will look like across the industry.


TCS builds the first digital heart for a professional runner

A digital heart is a high-fidelity multiscale computational model of the cardiac system. It enables insight into myocardium mechanisms, subcellular mechanisms, electromechanical activation (generation and conduction of cardiac electrical potential leading to cardiac muscle contraction), and hemodynamics (valvular functions, chamber pressures, myocardial wall tension, and coronary blood flow) on both the micro and macro scales. TCS creates a digital heart from a number of data sets, including an MRI. With the data from an MRI alongside various historical and speculative data sets, a functioning heart is modeled in a virtual environment. By applying AI/ML and other analysis, users can see the impact of different conditions and situations, such as beginning a long-term exercise program or taking a medication. After a digital twin of a heart is created, researchers can go a step further and use 3D printing to create a physical version of a heart. A 3D printed digital twin heart allows a doctor to practice surgical techniques and test solutions such as new heart valves or drugs without ever touching an actual body.


Keeping up with the demands of the cyber insurance market

While insurers are bringing new products to the market, they are increasingly tightening the requirements for prospective and existing policy holders for the cyber risks they underwrite, asking organizations to demonstrate a high level of security preparedness to gain coverage. In this scenario, thorough planning ahead of the application process ensures that organizations are in the best position to get coverage and reap the benefits of their policy. So, what are the priorities and the key security factors at play to ensure organizations can improve their chances of qualifying? ... Additionally, insurers pay close attention to incident response plans, anticipating a robust strategy that aligns IT, security, and developers for a swift, effective reaction to cyber threats. Devising thorough plans, with role checklists and response measures, and organizing regular simulation exercises will enhance organizations’ incident readiness and show insurers that they are genuinely prepared. Finally, post-attack recovery plans also play a significant role in coverage viability. 



Quote for the day:

“Nobody talks of entrepreneurship as survival, but that’s exactly what it is.” -- Anita Roddick

Daily Tech Digest - October 11, 2023

The CIO at a crossroads: Evolve or become a dead-end job

While the CIO role is undoubtedly changing, no business can afford to let their staff go out and buy whatever technology they want. The potential risks of leaving professionals to their own devices range from burgeoning costs in terms of cloud provision to the fear of sensitive enterprise data being pushed into public AI systems without due care and attention. Businesses need someone to ensure advanced digital technologies are exploited in a safe, secure, and cost-effective manner. And the person within the enterprise who holds that experience is still the CIO, says Richardson. “While things are now much more advanced, that core role — ensuring reliable, efficient, and secure business operations — is still crucially important,” he says. “There is certainly a very wide scope of functional and technical disciplines for modern CIOs to understand, such as cybersecurity, cloud infrastructure, AI and machine learning, end-user experience design, enterprise architecture, and more.” That’s a belief that chimes with Lily Haake, head of technology and digital executive search at recruiter Harvey Nash.


Victory by Surveillance Isn’t Possible—To Win, Engage the Adversary

Despite the profound intelligence exhibited by tools and methodologies originating from academia or the minds of Silicon Valley innovators, they inherently maintain a passive stance. More intelligent surveillance simply will not get the job done. Moreover, our faith in AI-based solutions may well be (dramatically) overstated. Like in any arms race, the enemy gains access to the same tools and techniques used in defense for the benefit of their offense. This fact does not make for any “we are pulling ahead” kind of thinking. As AI-designed offensive techniques come online, we might in fact be further behind, which is a terrible thing to say after laying out US$200 billion for security tools last year. ... When each new tool we innovate adds to our burden as defenders, but doesn’t, realistically, alter the awful trends, we need a rethink. Basic zero-sum game theory—my opponent’s gain is at my expense, and vice versa—ensures we will stay on the receiving end unless there is a cost associated with the attacker’s behavior.


Why are OpenAI, Microsoft and others looking to make their own chips?

“The obvious point is that they have some requirement nobody is serving, and I reckon it might be an inference part that’s cheaper to buy and cheaper to run than a big GPU, or even the top Sapphire Rapids CPUs, without making them beholden to either AWS or Google,” according to Omdia principal analyst Alexander Harrowell. He added that he was basing his opinion on CEO Sam Altman’s comments that GPT-4 is unlikely to scale further, and would rather need enhancing. Scaling an LLM requires more compute power when compared to inferencing a model. Inferencing is the process of using a trained LLM to generate more accurate predictions or results. Further, analysts said that acquiring a large chip designer might not be a sound decision for OpenAI as it would approximately cost around $100 million to design and get the chips ready for production. “While OpenAI can try and raise money from the market for the effort, the deal with Microsoft earlier this year essentially led to selling an option over half the company for $10 billion, of which some unspecified proportion is in non-cash Azure credits — not the move of a company that’s rolling in cash,” Harrowell said.


It’s Time to End the Battle Between Waterfall and Agile

Hybrid methodologies, such as the one implemented by Philips for their digital transformation initiatives, offer a mix of Agile’s flexibility and Waterfall’s structure. Philips adopted a hybrid approach for its HealthSuite digital platform, delivering rapid, iterative releases for software development while still adhering to strict documentation and safety guidelines. By combining these two approaches, Philips was able to create a hybrid approach that was both flexible and structured. This resulted in better product quality, reduced time to market, predictable costs and savings. ... A hybrid approach allows for risk mitigation by blending Agile’s adaptability with Waterfall’s structured planning, as demonstrated by Tesla’s hybrid approach to the development of their Model 3. To build the Gigafactory, where the vehicle’s batteries are produced, Tesla utilized rigorous planning and risk assessment methods. At the same time, Tesla’s capability to update vehicle software over the air allows for rapid issue resolution and feature addition post-production. This dual strategy enables Tesla to mitigate risks effectively while maintaining flexibility in a high-stakes manufacturing landscape.


5 Focus Areas for Better Cloud Security Programs in Financial Services

Finserv organizations need a strategy for discovering employees’ usage of cloud apps or services that haven’t been authorized by the IT department – also known as Shadow IT – that may lead to unsecured data. One popular method to monitor Shadow IT usage is a Cloud Access Security Broker (CASB): an intermediary between cloud consumers and providers that enforces security policies as cloud resources are accessed. Secure web gateways (SWG) and next-generation firewalls are other helpful tools used to inspect network traffic and provide advanced protection. But it’s not enough to just take stock of Shadow IT – organizations also need a plan for how to secure any unauthorized apps or services they discover. ... Finserv firms store an average of 61% of sensitive data in the public cloud – equal to other sectors. They also store similar types of vital data, but even more so in the way of competitor data, confidential internal documents, personal staff information, intellectual property, government identification, payment card information and network passwords. 


F5 Warns Australian IT of Social Engineering Risk Escalation Due to Generative AI

Australian IT teams can expect to be on the receiving end of social engineering attack growth. F5 said the main counter to changing bad actor techniques and capabilities will be education to ensure employees are made aware of increasing attack sophistication due to AI. “Scams that trick employees into doing something — like downloading a new version of a corporate VPN client or tricking accounts payable to pay some nonexistent merchant — will continue to happen,” Woods said. “They will be more persuasive and increase in volume.” Woods added that organizations will need to ensure protocols are put in place, similar to existing financial controls in an enterprise, to guard against criminals’ growing persuasive power. This could include measures such as payments over a certain amount requiring multiple people to approve. ... There have been warnings that armies of bots, supercharged by new AI tools, could be utilized by criminal organizations to launch more sophisticated automated attacks against enterprise cybersecurity defences, expanding a new front in organisations’ war against cyber criminals.
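The kind of protocol Woods describes — payments over a certain amount requiring multiple people to approve — can be expressed as a very small policy check. This is an illustrative control, not F5 guidance; the threshold and approver count are arbitrary assumptions:

```python
# Illustrative approval control: large payments need two distinct human
# approvers, so one persuasive AI-generated phishing lure cannot succeed alone.
# THRESHOLD and REQUIRED_APPROVERS are hypothetical policy values.
THRESHOLD = 10_000
REQUIRED_APPROVERS = 2

def payment_allowed(amount: float, approvers: set[str]) -> bool:
    """Small payments need one approver; payments above the threshold need two."""
    needed = REQUIRED_APPROVERS if amount > THRESHOLD else 1
    return len(approvers) >= needed

print(payment_allowed(2_500, {"alice"}))          # True
print(payment_allowed(50_000, {"alice"}))         # False: second approver required
print(payment_allowed(50_000, {"alice", "bob"}))  # True
```

Using a set of approver identities (rather than a count) matters: it prevents the same person from "approving twice", which is exactly the failure mode a social engineer would exploit.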


Translating Failures into Service-Level Objectives

We can be proactive about failure, creating SLOs from chaos engineering and game days. Chaos engineering is the discipline of experimenting on a system to build confidence in the system’s capability to withstand turbulent conditions in production. We want to inject failure into our systems to see how they would react if this failure were to happen on its own. This allows us to learn from failure, document it and prepare for failures like it. We can start practicing these types of experiments with game days. A game day is a time when your team or organization comes together to do chaos engineering. This can look different for each organization, depending on the organization’s maturity and architecture. These can be in the form of tabletop exercises, open source/internal tooling such as Chaos Monkey or LitmusChaos, or vendors like Harness Chaos Engineering or Gremlin. No matter how you go about starting this practice, you can get comfortable with failure and continue to build a culture of embracing failure at your organization. This also allows you to continue checking on those SLOs we just set up.
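The core loop of such an experiment can be sketched in plain Python — inject failures into a simulated dependency at a chosen rate, then compare the observed success rate against a target SLO. This is a toy stand-in for tools like Chaos Monkey or LitmusChaos, with made-up rates and targets:

```python
# Minimal chaos-style experiment (illustrative, not any vendor's tooling):
# inject failures at a known rate, then check observed availability vs. an SLO.
import random

def flaky_dependency(failure_rate: float) -> bool:
    """Simulated downstream call; fails with the injected probability."""
    return random.random() >= failure_rate

def run_experiment(calls: int, failure_rate: float, slo: float) -> bool:
    """Run `calls` requests against the flaky dependency; True if SLO holds."""
    successes = sum(flaky_dependency(failure_rate) for _ in range(calls))
    observed = successes / calls
    print(f"observed availability: {observed:.3f} (SLO target {slo})")
    return observed >= slo

random.seed(7)  # deterministic runs make the game-day write-up reproducible
# Injecting ~2% failures against a 99% SLO shows whether the system
# (here, a bare dependency with no retries) can absorb that much turbulence.
run_experiment(calls=10_000, failure_rate=0.02, slo=0.99)
```

A real game day would inject failure into actual infrastructure (killed pods, added latency, dropped packets) and read availability from monitoring, but the question asked is the same: with this failure present, do we still meet the SLO?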


Turning military veterans into cybersecurity experts

Cyber threat intelligence (CTI) is a specialism within cybersecurity that has been built upon traditional military intelligence processes and theories. Because of this, anyone who has served in a traditional intelligence role in the military will have solid coverage of some of the hard skills needed for these roles. With support in upskilling, they can build their knowledge of IT networks and some CTI tooling and platforms, so this career path can be very accessible. Information security management careers also require people to know how to manage and lead, which more often means managing people than technology. While a grounding in the technical aspects of cybersecurity is very important, ex-Forces personnel commonly possess management and leadership skills and experience, often developed over years during their military careers, which is evidently useful the moment they step into a cyber team. To enhance this further, most have worked with sensitive data, stored and processed on sensitive systems, shaping or at least adhering to policy, and sometimes even managing large IT accounts.


6 Pain Points for CISOs and CIOs and What to Do About Them

“Everybody is struggling with the sprawling technology stack,” says Carl Froggett, CIO of cybersecurity company Deep Instinct. Many companies are working with a mix of legacy technology, like on-premises servers, and newer cloud and SaaS systems. CIOs and CISOs face the operational and security challenges that come with this disparate tech stack and the migration to new systems. With that sprawl comes the challenge of data governance. What data does a company have? Where does it reside? How can it be safeguarded? If CIOs and CISOs can’t answer the first two questions, they can’t even begin to collaborate on an effective strategy for protecting their organizations’ data. ... While talent is a scarce commodity, CIOs and CISOs can leverage third parties for the skills they lack internally and have yet to hire. They can also automate lower-level tasks, freeing staff to spend more time on more important, less repetitive work. IT leadership can also retrain and upskill existing team members.


Using Visual Studio Code for C# development

The C# Dev Kit adds more in the way of code-navigation tooling, using the Solution Explorer to work with test frameworks and the Roslyn tools to jump quickly to specific parts of your application, peeking at definitions and references to understand how classes and methods are used. The Solution Explorer helps manage complex projects, using virtual solution folders to group files without affecting your underlying file system. Solution folders let you separate code from tests, as well as manage different UIs for different device targets. The IntelliCode extension adds AI-supported code completion to your editor, with the ability to predict entire lines of code based on what you’ve already written. This works alongside the normal IntelliSense features to guide code predictions, reducing the risk of errors. It will even highlight possible completions in IntelliSense and rank the members in a class based on your code to speed up selections. It’s important to understand that this is a local AI model. Unlike GitHub Copilot, IntelliCode operates disconnected from the internet, helping keep your code private and enabling you to work from anywhere.



Quote for the day:

“Identify your problems but give your power and energy to solutions.” -- Tony Robbins

Daily Tech Digest - October 10, 2023

Crafting Leaders: The finishing touches

The process of narrowing the funnel for identifying future leaders must commence soon after fresh talent is inducted into the organization, and certainly long before organizational knocks have bled the spirit, energy and desire-to-be-different from these young men and women. An earlier column explained how alternative fast-track schemes function and ways to choose and groom future leaders from early stages. More recently, I have added two codas to the exposition. When choosing leaders to face the uncertainties of tomorrow, it is not enough to capture their capabilities at the time of selection; we must also take into account the steepness of the slope they have traversed to get there. That is the best guarantee of future resilience and continued development in spite of handicaps. Moreover, constraints of time and a shortage of the right kind of teachers prevent those rising to the top of the pyramid from formally refreshing their knowledge and capabilities as frequently as they should. ... The grooming of Fast-Trackers (FTers) must vary substantially from company to company and from individual to individual.


The undeniable benefits of making cyber resiliency the new standard

"It's about practicing due care and due diligence from a cybersecurity standpoint and having a layered defense with a layered people-process-and-technology-driven program with the right governance and services and tools to enable the mission of the organization so that if there's an event, you can recover and adapt to keep business running," he adds. To do that, CISOs and their executive colleagues must have their cybersecurity basics well established -- basics such as knowing their tolerance for risk, understanding their IT environment, their security controls, their vulnerabilities, and how those all could impact the organization's operations. CISOs aren't limited to these frameworks or the assessment tools created specifically to measure cyber resiliency, say Tenreiro de Magalhaes and others. CISOs can also run tabletop drills and red-team exercises to test, measure and report on resiliency. Repeating such drills and exercises can then track whether the organization's cybersecurity program, as well as specific additions to it, helps improve resiliency over time, experts say.


Hybrid work is in trouble. Here are 4 ways to make it work in the longer term

"We're all humans and we work with each other," he says. "To make hybrid working effective, there must be an element of interaction. There must be a connectivity, both to the business and your team." Warne says balance is essential, so find the right reasons for bringing people together in the office. "At River Island, it's about making sure that people are in for a purpose and not just presenteeism, and making sure that the people who need to work together are able to work together," he says. "If you work with a colleague, it's crucial you don't have a situation where one of you comes into the office and the other one works from home." Warne says his team doesn't have mandated days in the office. Instead, his organization's hybrid-working strategy is all about collaboration. ... However, hybrid working has allowed for an even higher level of flexibility in her organization -- and the key to success has been constant communication. Cousineau continues to listen to feedback from her team. One staff member suggested hybrid all-team meetings were creating a big divide between those who were present and those who weren't.


Evolution of stronger cyber threat actors: The flip side of Gen AI story

Deepfake technology, a subset of Generative AI, allows threat actors to create convincing video and audio forgeries. This presents a substantial threat to organisations as deepfake attacks can tarnish reputations, manipulate public opinion, and even influence financial markets. Imagine a scenario where a CEO’s voice is convincingly mimicked, disseminating false information that impacts stock prices; or consider a deepfake video of a prominent figure endorsing a product or idea they never actually supported. Such manipulations can lead to severe consequences for businesses and society at large. Generative AI is revolutionising the way malware is created. Threat actors can use AI algorithms to generate highly evasive and adaptable malware variants that can easily evade traditional signature-based antivirus solutions. These AI-generated malware strains constantly evolve, making detection and containment a significant challenge for cybersecurity professionals. Moreover, Generative AI allows for the customisation of malware based on the target environment. 


The CIO’s primary job: Developing future IT leaders

The challenge for IT management is to find people who are good at their current job but are also interested in the management side that is necessary for departmental success. In my opinion, the reason many IT departments have decided to go outside IT to bring in CIOs is that IT has not fostered the kind of environment that develops these types of professionals. IT has not traditionally tried very hard to develop strong managers from within. Most people learn to manage by watching what their managers do. And if people have bad managers, the results can be less than optimal. So how do we change that conundrum? First, we must commit our current managers and supervisors to a strong management training program. Once they have been trained in the subtleties of management, we will hopefully begin to see new managers with skills developed from within. Effective management training can, and should be, structured around techniques that current managers use to be successful. Delegating effectively and encouraging career growth among staff are two examples.


Evolution of Data Partitioning: Traditional vs. Modern Data Lakes

In modern data lakes, data is organized into logical partitions based on specific attributes or criteria, such as day, hour, year, or region. Each partition acts as a subset of the data, making it easier to manage, query, and optimize data retrieval. Partitioning enhances both data organization and query performance. Instead of relying solely on directory-based partitioning or basic column-based partitioning, these systems support complex, nested, and multi-level partitioning structures. This means that data can be partitioned on multiple attributes simultaneously, allowing for highly efficient data pruning during queries. ... Snapshots are a fundamental concept used to capture and manage different versions or states of a table at specific points in time. They are the key feature that enables Time Travel, data auditing, schema evolution, and query consistency within modern data lakes such as Iceberg tables. Some important features of snapshots: each snapshot represents a specific version of the data table, and creating a snapshot essentially freezes the state of the table at the moment it is taken.
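The multi-attribute pruning described above can be sketched with a toy in-memory table. The (region, day) partition spec and the records are invented for illustration; real table formats such as Iceberg track partitions in metadata files rather than a Python dict, but the effect on query cost is the same.

```python
from collections import defaultdict

# Toy multi-level partitioning: records bucketed by (region, day).
table = defaultdict(list)

def write(record):
    # Partition spec: first by region, then by day.
    key = (record["region"], record["day"])
    table[key].append(record)

def query(region=None, day=None):
    """Scan only partitions matching the predicates (partition pruning)."""
    partitions_scanned = 0
    results = []
    for (r, d), rows in table.items():
        if (region and r != region) or (day and d != day):
            continue  # pruned: this partition's data is never read
        partitions_scanned += 1
        results.extend(rows)
    return results, partitions_scanned

for i in range(10):
    write({"region": "eu" if i % 2 else "us", "day": f"2023-10-{i % 5:02d}", "v": i})

rows, scanned = query(region="eu", day="2023-10-01")
print(f"matched {len(rows)} row(s) after scanning {scanned} of {len(table)} partitions")
```

Because both predicates match partition attributes, only one of the ten partitions is touched; everything else is skipped without reading any data.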


Will Quantum Computers Become the Next Cyber-Attack Platform?

A quantum cyberattack would likely be similar to today’s identity theft and data breaches. “The only difference is that the damage would be more widespread, since quantum computers could attack a broad class of encryption algorithms rather than just the particular way that a company or data center implements the algorithm, which is how attacks are currently done,” explains Eric Chitambar, associate professor of electrical and computer engineering at the Grainger College of Engineering at the University of Illinois Urbana-Champaign. Chitambar also leads the college’s Quantum Information Group. ... Conducting an enterprise-wide quantum risk assessment to help identify systems that might be most vulnerable to a quantum attack would be a good place to start, Staab says. He also recommends deploying enterprise-wide Quantum Random Number Generator (QRNG) technology to generate quantum-resistant encryption keys. This approach promises crypto agility, implementation of Quantum Key Distribution (QKD) and the development of quantum-resistant algorithms. “As we head toward a quantum computing era, adopting a zero-trust architecture will become more important than ever,” Staab states.


6 Reasons Private LLMs Are Key for Enterprises

Private LLMs can be used with sensitive data — such as hospital patient records or financial data — bringing the power of generative AI to produce groundbreaking achievements in these fields. With the LLM running on your private infrastructure and only exposed to the people who should have access to it, you can build powerful customer-focused applications and chatbots, or simply provide an easier way for your employees to interact with your company data — without the risk of sending the data to a third party. ... With private LLMs, you can tailor the model and its responses to your company, industry or customers’ needs. Such specific information is not likely to be included in general or public LLMs. You can feed your LLM with customer support cases, internal knowledge-base articles, sales data, application usage data and much more, ensuring that the responses you receive are what you’re looking for. ... Controlling the version of the model you’re using is extremely important: if you change the model you use to create embeddings, you will need to re-create (or version) all the embeddings you store.
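The embedding-versioning point can be made concrete with a small sketch. Everything here is hypothetical: `fake_embed` stands in for a real embedding model call, and the store is a plain dict rather than a vector database.

```python
CURRENT_MODEL = "embedder-v2"

def fake_embed(text, model):
    # Placeholder: a real system would call an embedding model here.
    return [float(len(text)), float(len(model))]

store = {}

def upsert(doc_id, text, model=CURRENT_MODEL):
    # Tag every stored vector with the model version that produced it.
    store[doc_id] = {"vector": fake_embed(text, model), "model": model}

def stale_docs(current_model=CURRENT_MODEL):
    """Documents whose embeddings came from an older model version."""
    return [d for d, rec in store.items() if rec["model"] != current_model]

upsert("a", "hello world", model="embedder-v1")  # embedded before the upgrade
upsert("b", "internal kb article")
print(stale_docs())  # ['a'] must be re-embedded with embedder-v2
```

Vectors produced by different model versions are not comparable, so a similarity search mixing them silently degrades; tracking the version per record is what makes selective re-embedding possible.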


Tech Revolution: The Rise of Automation and Its Impact on Society

To offset potential adverse effects, it is imperative for companies and governments to enact policies and initiatives supporting workers susceptible to automation’s impact. This may encompass training programs designed to furnish workers with the requisite skills to excel in the evolving job market, along with social support programs to aid those grappling with employment challenges. Public policy will emerge as a pivotal determinant of technological evolution’s trajectory and consequences. Economic incentives, education reforms, and immigration policies will directly influence productivity, employment levels, and economic mobility. ... Central and state government agencies ought to collaborate with industry partners and educational institutions to craft programs that equip new workers with the skills needed to thrive in an automation-driven world. These programs have the potential to combat emerging inequality by propelling education and training initiatives that foster success for all.


When open source cloud development doesn't play nice

Remember that the cloud provider is merely “providing” the open source software. They are not typically supporting it beyond that. For more, you’ll need to look internally or in other places. Open source users, whether in the cloud or not, often have to rely on community resources, typically provided through forums or message boards, which takes time. This can impede cloud development progress in urgent, time-sensitive scenarios or complex issues. A developer told me once that she needed to attend a meeting of the open source community before she could have a resolution to a specific problem—a meeting that was five weeks out. That won’t work. From a security standpoint, open source software can pose specific challenges. Although a community of developers regularly reviews such software, it can still harbor undetected vulnerabilities, primarily because its code is openly accessible. For instance, some open source supply chain issues arose a few years ago. These vulnerabilities can become severe security threats without stringent security measures and frequent updates. 



Quote for the day:

''Sometimes it takes a good fall to really know where you stand.'' -- Hayley Williams

Daily Tech Digest - October 08, 2023

How AI is enhancing anti-money laundering strategies for improved financial security

Financial institutions collect massive volumes of transactional data daily, making it impractical for human experts to manually review each transaction for signs of money laundering. AI systems, on the other hand, can efficiently process this data, flagging transactions that exhibit unusual patterns or deviate from established norms. These systems use advanced algorithms to develop customer behavior profiles, creating a baseline against which future transactions can be compared. Any deviation from the norm, such as sudden large transfers, frequent cash deposits, or transactions with high-risk jurisdictions, triggers an alert for further investigation. This allows institutions to focus their resources on genuinely suspicious activities rather than drowning in false positives. Analysing data to recognise suspicious activities: AI algorithms excel at analysing enormous datasets, identifying hidden patterns and correlations that could signify money laundering. By examining transaction history and customer behavior, AI-enabled tools can uncover links between seemingly unrelated events.
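At its simplest, a baseline-and-deviation check like the one described is an outlier test against a customer's own history. The sketch below uses a z-score over invented transaction amounts; production AML systems use far richer features and learned models, so treat this only as the shape of the idea.

```python
import statistics

# Toy behavior profiling: flag a transaction when it deviates sharply
# (here, more than 3 standard deviations) from a customer's history.

def build_profile(history):
    return {"mean": statistics.mean(history), "stdev": statistics.pstdev(history)}

def is_suspicious(amount, profile, threshold=3.0):
    if profile["stdev"] == 0:
        return amount != profile["mean"]
    z = abs(amount - profile["mean"]) / profile["stdev"]
    return z > threshold

history = [120, 80, 100, 95, 110, 105, 90]  # a customer's usual amounts
profile = build_profile(history)
print(is_suspicious(100, profile))   # False: a typical amount
print(is_suspicious(5000, profile))  # True: a sudden large transfer
```

Only the flagged transaction would be routed to an analyst, which is exactly the false-positive triage the excerpt describes.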


Record Numbers of Ransomware Victims Named on Leak Sites

At current levels, 2023 is on course to be the biggest year on record for victim naming on so-called ‘name and shame’ sites since the practice began in 2019. The 10,000th victim name is expected to have been posted to leak sites in late summer 2023, but this has not yet been confirmed by Secureworks. ... The 2023 report found that median ransomware dwell time was under 24 hours, a dramatic fall from 4.5 days during the previous 12 months. In 10% of cases, ransomware was deployed within five hours of initial access. Smith believes this trend is due to improved cyber detection capabilities, with cyber-criminals speeding up their operations to reduce the chances of being stopped before deploying ransomware. “As a result, threat actors are focusing on simpler and quicker to implement operations, rather than big, multi-site enterprise-wide encryption events that are significantly more complex. But the risk from those attacks is still high,” commented Smith.


Cloud backup and disaster recovery evolve toward maturity

At the end of the day, backup as a service is kind of just that. It operates like a regular backup application, using a schedule and point-in-time backups. DRaaS is more about failing over if something comes up as a disaster recovery process. It's designed to replicate or restore data environments automatically; it doesn't transform data in the same sense that a backup may have a particular data format. DRaaS is about moving the data from point A to point B and being able to get back to it as quickly as possible, especially in the context of a failover. ... But with the flexibility that cloud data protection affords, a lot of these solutions can essentially get updated whenever you log on because they're SaaS-based. Also, there's so much data in the cloud now and lots of investment in digital transformation, new platforms and cloud-native applications, which is triggering some rethinking of cloud data protection strategies. All of this I think is shortening the review cycles. It's actually a domino effect: Data protection follows data production. 


Mitigating Security Fatigue: Safeguarding Your Remote Team Against Cyberthreats

It’s easy for remote workers to feel disconnected from their teams and employers, which is why it’s important to keep communication consistent. Having the right collaboration tools can make all the difference in keeping remote workers engaged and more likely to follow security protocols. Video calls can help team members meet face-to-face, reducing miscommunication and misunderstandings. It’s also important to have an easy way to collaborate on projects so everyone can stay on the same page and work moves forward efficiently. Of course, any technology you use should be easy to use and easy to keep secure. With the right communication tools, your remote team members can collaborate effectively, stay connected with team members, and generally remember that they aren’t at home alone — they belong to a larger organization. This feeling of connection will encourage and remind them to implement the company’s security standards even though they work from home. As remote work becomes more popular, the need for strong security practices becomes even more vital. 

One might be inclined to believe (from the Trellix example) that the returns of adopting AI in cyber-security processes, and the competitive business risks of not adopting it, are quite high from a sales perspective. This point can be rationalised by seminal academic theory in the strategic management sciences. Based on insights from the widely popular Five Forces strategy model by Michael Porter of the Harvard Business School, the threat of new entrants (Trellix competitors), product substitutes (competitor products churned from AI-driven platforms like HVS), the high bargaining power of customers (clients of Trellix-like products), and the low bargaining power of suppliers (Trellix) should push enterprises to adopt AI as a cyber-security strategy to boost sales. ... On top of everything, AI as a business strategy for modern IT/OT-driven business ecosystems has the potential to align very well with certain elements of the seminal Eight-Fold strategy proposed by Michael Cusumano of the MIT Sloan School of Management for software-driven businesses.


How to Stay Ahead of the Regulatory Curve with Robust Data Governance?

Establishing a data governance culture requires the right combination of people, process, and technology. Defining the right roles and responsibilities (people) and developing the right data governance framework (process) are steps in the right direction. But without the right tools (technology), it becomes difficult at best for a data governance culture to succeed. A data catalog is a critical tool for organizations looking to establish a data governance culture. It gives business users, many of whom are not data experts, clarity on data definitions, synonyms, and essential business attributes so they can understand and use their data more effectively. Data catalogs show who owns the data, allowing for greater collaboration across the business. They provide a self-service way for everyone in the organization to find the data they need and turn what used to be tribal knowledge into useful and accessible information that they can use to make better business decisions.
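A minimal sketch of what such a catalog entry might hold, reflecting the attributes described above (definition, synonyms, owner); the asset name, owner and table path are invented for illustration, and real catalogs are dedicated products rather than a dict.

```python
# Hypothetical catalog: one entry per data asset, discoverable by business terms.
catalog = {
    "customer_ltv": {
        "definition": "Projected lifetime revenue per customer, in USD.",
        "synonyms": ["lifetime value", "LTV", "customer value"],
        "owner": "finance-analytics",
        "source_table": "warehouse.finance.customer_ltv",
    }
}

def find_asset(term):
    """Self-service lookup: match a business term against names and synonyms."""
    term = term.lower()
    return [
        name for name, entry in catalog.items()
        if term in name.lower() or any(term in s.lower() for s in entry["synonyms"])
    ]

print(find_asset("lifetime value"))  # a non-expert's search term still resolves
```

The synonym list is what turns tribal knowledge into something searchable: a business user who only knows the phrase "lifetime value" still lands on the governed asset and its owner.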


Preparing for the Unexpected: A Proactive Approach to Operational Resilience

No firm can achieve operational resilience purely on its own. Intelligence sharing within the global financial community helps firms understand current and emerging threats and learn how others are mitigating them. It keeps larger institutions at the forefront of cybersecurity while arming smaller firms with knowledge and tools to protect themselves. It is so critical to operational resilience that DORA dedicates an entire article to it. Beyond regulation, the public sector is also increasingly collaborating with the private sector to protect critical infrastructure, which includes the financial sector. Around the world, organizations including the US Treasury Department's Hamilton Series and NATO's Locked Shields regularly conduct large-scale exercises to test that communication and coordination channels will function efficiently during major incidents. The goal is not only to minimize operational disruption but to proactively maintain public calm and trust. Operational risks are no longer geographically bound. Cross-border intelligence sharing and exercises help financial institutions build a comprehensive approach to operational resilience.


The Top 10 Hurdles to Blockchain Adoption

One of the most significant factors that has made blockchain adoption more difficult is the overall age of the average person using banking services. Unlike previous generations, the current demographic in the world is older than ever. Advancements in healthcare and other factors have increased life expectancy in most regions of the world. ... Energy consumption issues remain a top problem in the market. Conservationists have repeatedly pointed out that networks that leverage the Proof-of-Work consensus algorithm are power-hungry. The reason for this consumption is that the PoW system requires users to exercise their computational power as part of the validation structure. To combat these issues, there has been a steady migration of mining farms to renewables. ... Another issue that has held back blockchain adoption is the lack of supportive legislation for these projects. When there is a lack of governmental support, financial institutions are wary of joining an industry. The main reason for the concern is that they fear later regulatory pushback.


Redefining the Framework of Innovation

The impact of ecosystems on digital disruption today does draw sharp parallels to another important technological evolution. Specifically, it brings to mind the evolution of manufacturing and distribution technology which enabled the transition from vertical integration to multi-tier supply networks. The twist is ecosystem models look forward, not back in the value chain, enabling entire new value chains. However, while there are many clear benefits of ecosystems, these business models are contractually, logistically, and commercially complex. This is especially true when you factor in the challenges of partnering with early-stage tech companies. So, where should leaders begin when considering a partnership or alliance? Take inventory of your most critical innovation paths and evaluate them against the ecosystem model. Key criteria may include needs for outside expertise and intellectual capital, a reduction in capital risk and accelerated innovation delivery to the market. Focus time and resources on selecting the right ecosystem partner. 


Identifying The Right Risk Appetite For Your Business

While risk appetite has a traditional outlook, risk tolerance (or impact tolerance) helps companies move closer to the path of resilience. If risk appetite tells us how much risk an organization can take, risk tolerance expresses, in numbers, how much risk it "wants" to take. Essentially, tolerances are defined losses that an organization is willing to incur in meeting an objective. Every decision bears risks. If a business accepts risk or incurs loss due to a risk event that exceeds the agreed-upon risk appetite and tolerance levels, then serious fiscal, legal and reputational consequences can occur. For this reason, risk appetite should be reevaluated and reconciled whenever changes occur to strategic initiatives or the business environment. ... Risk appetite as a concept is not new, but what is trending is linking it to resilience programs so that organizations take the right amount of risk to meet business objectives while ensuring sustainability, employee health and safety and stakeholder well-being.



Quote for the day:

"The secret of success in life is for a man to be ready for his opportunity when it comes." -- Benjamin Disraeli