Daily Tech Digest - July 11, 2023

Multiple SD-WAN vendors can complicate move to SASE

The walls between networking and security teams must come down to deliver cloud-based security and network services across today’s sophisticated networks. “The opportunity to leverage a cloud-based architecture to enforce security policies to distributed locations and remote workers is the real value of SASE. It offers management efficiencies, it supports a modern workforce, and it supports an important integration between the network and security teams,” IDC’s Butler says. “In today’s world, when you have so many people working from home and so many distributed applications, a cloud-based security approach is really appealing.” As the market continues to evolve, vendors are boosting their capabilities – networking vendors are acquiring or developing security capabilities to offer SASE, and security providers are augmenting their product portfolios with advanced networking capabilities to offer SASE. That aligns with adoption trends; a majority (68%) of 830 respondents to an IDC survey said they would like to use the same vendor for their SD-WAN and security/SASE solution.


Decoding AI: Insights and Implications for InfoSec

AI is wonderfully adept at narrow tasks, but it is clueless beyond its specific training. It’s like a super-specialist who can thread a needle blindfolded but can’t understand why it shouldn’t sew its own fingers together. Say we task an AI with making a company network as secure as possible. It might suggest shutting down the network, preventing user access or even blocking external dataflows because, hey, it’s technically efficient! ... AI could reshape the world of cybersecurity in unimaginable ways, making our lives easier and more efficient. However, it is essential to bear in mind that AI, despite its remarkable abilities, is essentially a tool. It lacks the human touch—our capacity for intuition, empathy and understanding that extends beyond the data. AI will undoubtedly keep improving, but it is on us to guide its evolution in a way that respects our shared humanity and safeguards our values. So, the next time you see a headline touting the latest AI breakthrough, take a moment to appreciate the amazing technology—but remember that it’s not quite as “intelligent” as it might seem.


Sarah Silverman sues OpenAI, Meta over copyright infringement in AI training

The suits, filed last week in federal district court in San Francisco, argue that Microsoft-backed OpenAI and Meta didn’t have permission to use copyrighted works by Silverman and two other authors, Christopher Golden and Richard Kadrey, when they used them to train ChatGPT and Meta's LLaMA (Large Language Model Meta AI). The suits ask for injunctions against the companies to prevent them from continuing similar practices, as well as unspecified monetary damages. The heart of the lawsuit, according to the complaint, is OpenAI’s use of a data set called BookCorpus, which it said was created in 2015 for the purpose of large language model training. Much of BookCorpus, the plaintiffs say, was copied from a site called Smashwords, a host for self-published novels that were under copyright. Additionally, the complaint alleges that there is no way the book-based data sets used to train OpenAI's models came entirely from legal sources, as no legal databases offer enough content to account for the size of the “Books1” and “Books2” sets.


Law firms under cyberattack

As the UK National Cyber Security Centre (NCSC) noted in a recent report focusing on cyber threats to the legal sector, law firms handle sensitive client information that cybercriminals may find useful, including exploiting opportunities for insider trading, gaining the upper hand in negotiations and litigation, or subverting the course of justice. The potential consequences of such breaches can be severe, as the disruption of business operations can incur substantial costs. Ransomware gangs specifically target law firms to extort money in exchange for allowing the restoration of business operations. In 2020, the Solicitors Regulation Authority (SRA) published a cybersecurity review revealing that 30 of the 40 law firms it visited had been victims of a cyberattack. In the remaining ten, cybercriminals had directly targeted the firms' clients through legal transactions. “While not all incidents culminated in a financial loss for clients, 23 of the 30 cases in which firms were directly targeted saw a total of more than £4m [$5m+] of client money stolen,” the SRA noted.


7 IT consultant tricks CIOs should never fall for

Making a business case - Consultants love this one. It’s where the CIO engages them to build the business case for a pet project or priority — not to determine whether there’s even a business case to be made. To make one, the consultant starts with the predetermined answer and works backward from there, employing such questionable practices as cherry-picked data, one-sided analyses, inappropriate statistical tests, and selective anecdotes to define and justify a strategic program whose success depends on … surprise! … a major engagement for the consultant’s employer. ... Win, then hire - This is less common for delivery teams than for the consultants whose work resulted in the win that created the need for the delivery team, but still … Few consultancies keep a bench of any size. As a result, winning an engagement is often far more stressful than losing one, because after winning an engagement the consultancy has no more than a month or so to hire the staff needed to execute the engagement, familiarize the newly hired staff with the methodology and practices the engagement calls for, and build a working relationship with their new managers.


Why Qubit Connectivity Matters

Of course, high-connectivity architectures are not without disadvantages. High connectivity relies on the ability to shuttle qubits around, and shuttling qubits carries several potential issues. Shuttling qubits can be a relatively slow process compared to the speed of quantum gate operations. This can increase the total computation time and reduce the number of operations that can be performed before the qubits lose coherence. The process of moving qubits introduces the risk of decoherence, which is the loss of the quantum state due to interaction with the environment. Shuttling qubits also adds an extra layer of complexity to the design of the computer, and this can be challenging to implement, especially in a large-scale system. In summary, qubit connectivity plays a vital role in the performance and functionality of quantum computers. It impacts the implementation of quantum algorithms, the creation of quantum entanglement, error correction, and the overall scalability, speed, and efficiency of quantum computing systems. When one considers the quantum modality of choice for their application, qubit connectivity should be one of the factors taken under consideration.


Analysts: Cybersecurity Funding Set for Rebound

A lot of the optimism has to do with enterprises continuing to invest heavily in cybersecurity, despite a slowdown in other expenditures. Market research firm IDC expects that organizations will spend some $219 billion this year on security products and services — or some 13% more than they did in 2022 — to address threats, to support hybrid work environments, and to meet compliance requirements. The areas that will receive the most spending are managed security services, endpoint security, network security, and identity and access management. "While the theme of conservatism and expectations for continued headwinds have remained throughout the first half of the year, we do expect to see strategic activity slowly begin to rebound in the second half of 2023 and into 2024," says Eric McAlpine, founder and managing partner of analyst firm Momentum Cyber. Financing and M&A activity will both eventually pick up as companies that were able to make do financially so far begin to feel the need for fresh capital to fuel their business, he says.


Why Enterprises Should Merge Private 5G With Programmable Communications

5G private networks provide an opportunity to integrate the application and the network so that the two can inform one another, allowing adjustments to be made in real time. Businesses not only have an improved network with a private cellular network, but they can also sync their applications with the network’s performance, enabling multiple tasks to be completed based on network performance at a specific moment. ... A new generation of digital engagement providers is looking at how these communication platforms evolve into platforms that integrate across a range of business processes. They are not only leveraging robust voice, video and messaging solutions but also introducing fully programmable computer vision and audio analytics solutions. This combination of communications and AI-based media analytics and programmability makes this evolved communications platform an ideal and unexpected solution to Industry 4.0 business needs. New communication platforms are focused less on meeting a single business need and more on integrating communications to evolve and inform applications, making adjustments and building cost-effective efficiencies.


5 ways to prepare a new cybersecurity team for a crisis

Not all security incidents cause an enterprise-level crisis, and not all crises are cyber-related. Natural disasters, product recalls, accidents, and public relations debacles are all examples of non-cyber events that could have a significant negative impact on an organization. So, in preparing a new cybersecurity team for a crisis, it is important to define and rank, first by severity and then by likelihood, precisely what the business would define as a security “crisis,” says John Pescatore, director of emerging security trends at the SANS Institute. “It is not the case that the top of the list will always be something like ransomware,” Pescatore says. Sometimes, a crisis might have nothing to do with cybersecurity, he notes. “For example, I remember hearing a Boston-area hospital CIO talk about how they were bombarded with attempts to get into hospital data after the [Boston Marathon] bombing because press reports had noted the bombers went to that hospital.” Once the cybersecurity team understands what would constitute a security crisis for the company, it should create playbooks for the top handful of scenarios.


Writing your company’s own ChatGPT policy

To help employees grasp and embrace key basics quickly, one useful starting point can be signposting relevant parts of existing policies they can check for best practices. Producing tailored guidance for an internal ChatGPT policy is slightly more complex. To develop a truly all-encompassing ChatGPT policy, companies will likely need to run extensive cross-business workshops and individual surveys which enable them to identify, and discuss, every use case. Putting in this groundwork, however, will allow them to build specific directions which ultimately ensure better protection, as well as giving workers the comprehensive knowledge required to make the most of advanced tech. ... Explicitly highlighting threats and setting unambiguous usage limitations is also just as critical to leave no room for accidental misuse. This is particularly important for businesses where generative AI may be deployed to streamline tasks that involve some level of PII, such as drafting client contracts, writing emails, or suggesting which code snippets to use in programming.



Quote for the day:

"Learning is a lifetime process, but there comes a time when we must stop adding and start updating." -- Robert Brault

Daily Tech Digest - July 10, 2023

Digital Humans: Fad or Future?

A digital human is a computer-generated entity that looks, behaves, and interacts like a real human. “To create a digital human, advanced technologies such as artificial intelligence, machine learning, and natural language processing are used to replicate the complexities of human thought and behavior,” says Matthew Ramirez, a technology entrepreneur and investor. Going beyond concierge services, digital humans could eventually play important roles in areas as diverse as education, healthcare, and entertainment. ... Although digital humans promise multiple benefits, they also present a potential threat. They could be misused in various ways to mislead, defraud, or even physically harm people, Ramirez warns. “It’s crucial to be cautious and consider the negative consequences when creating digital humans, just like with any new technology,” he says. Improvements to generative AI programs are making digital humans more realistic, which increases the possibility that consumers may have difficulty distinguishing when they’re talking to a real human versus a digital human, Bechtel says.


Feds Urge Healthcare Providers, Vendors to Use Strong MFA

CISA recommends that entities implement phishing-resistant multifactor authentication, which can help detect and prevent disclosures of authentication data to a website or application masquerading as a legitimate system, the HHS bulletin says. For instance, phishing-resistant multifactor authentication could require a password or user biometric data, combined with an authenticator such as a personal identity verification card or other cryptographic hardware or software-based token authenticator, such as FIDO with WebAuthn authenticator, according to the bulletin. "The layered defense of a properly implemented multifactor authentication solution is stronger than single-factor authentication such as relying on a password alone," HHS OCR wrote. Walsh suggested that healthcare sector entities consider integrating password vaults with MFA. Also, "passwordless authentication is probably in the future but we haven’t seen it implemented in healthcare," he said. But the bottom line, he added, is that "any MFA is probably better than no MFA."


Generative AI is coming for your job. Here are 4 reasons to get excited

Yes, the fast-emerging technology could replace some workplace activities, but it's up to us to make sure its exploitation is focused on removing repetitive tasks, such as scanning spreadsheets for data-entry errors. "I think we should be excited because it has potential to allow us to do more of the high-value things in our work, and less of the stuff that doesn't need valuable thought processes," she says. Furby says it's important to recognize that the introduction of generative AI should not be seen as an endpoint, but as a pathway to increased productivity. ... AI's ability to pick up large chunks of the work associated with everyday activities could free up internal staff to focus on more innovative and interesting projects. "I think that's always a challenge in terms of how you become more efficient in the things that you can do, and how you can approach more topics and scale at speed. And I think that's where the excitement is – generative AI could help us." For all his enthusiasm for emerging technology, Langthorne doesn't want to dismiss the concerns of people who are worried about the rise of generative systems, such as ChatGPT.


UK regulator refers cloud infrastructure market for investigation

The news comes three months after Ofcom raised “significant concerns” about Amazon Web Services (AWS) and Microsoft, alleging that they were harming competition in cloud infrastructure services and abusing their market positions with practices that make interoperability difficult. Ofcom defines cloud infrastructure services as those which are built on physical servers and virtual machines hosted in data centers and consisting of infrastructure as a service (IaaS) products, such as storage, computing and networking, and platform as a service (PaaS), which includes the software tools needed to build and run applications. When the initial investigation was launched, Ofcom said that AWS and Microsoft Azure had a combined UK market share of between 60% and 70%, while the next nearest competitor, Alphabet-owned Google, had a 5% to 10% share. Meanwhile, between 2018 and 2021, the percentage of cloud providers that were not AWS, Microsoft, or Google fell from 30% to 19%, causing Ofcom to note that such levels of market dominance could potentially make it harder for smaller cloud providers to compete with the market leaders, further consolidating the big providers' revenue and market share.


6 business execs you’ll meet in hell — and how to deal with them

Some executives have exactly zero aptitude when it comes to the technology that enables them to run their businesses. And you probably shouldn’t expect them to, says Bob Stevens (not his real name), former CISO for a large retail operation. After all, they’re not being paid to think about technology; they’re being paid to sell products. “The CEO at that retail company was not a technologist,” says Stevens. “He found it totally uninteresting. So when the IT and security teams would present, his attention would quickly wane and he would start answering texts and reading email. He’d say, ‘Unfortunately, technology means nothing to me. I get that it is important to the company and that we have to have it. So I will manage the business value against the cost. Just don’t try to make me understand it.’” It can be demoralizing, Stevens adds. Worse, because senior leadership doesn’t fully understand the issues in play or the threats to the business, they may not prioritize investments appropriately. 


Greatest cyber threats to aircraft come from the ground

From a CISO's perspective, what matters is not that a specific security vulnerability was found in a particular model of aircraft, but rather the general idea that modern aircraft with interconnected IT networks could potentially allow intrusions into high security avionics equipment from low security passenger internet access systems. This being the case, the time has come for all onboard aircraft systems -- including avionics -- to be regarded as being vulnerable to cyberattacks. As such, the security procedures for protecting them should be as thorough and in-depth "as any other internet-connected device," Kiley says. "The disclosure I did in 2019 was the first major one that involved the industry, the airlines, and the US government cooperating to ensure that the disclosure was done responsibly and following security industry best practices. This should be a model for how to alert the industry of an issue responsibly." Unfortunately, "Many manufacturers in the aviation industry do not understand how to work with security researchers and instead attempt to stifle research by threatening action instead of working together to solve identified issues," observes Kiley.


Monolith or Microservices, or Both: Building Modern Distributed Applications in Go with Service Weaver

Google’s new open source project Service Weaver decouples application code from how that code is deployed. Service Weaver is a programming framework for writing and deploying cloud applications in the Go programming language, where deployment decisions can be delegated to an automated runtime. Service Weaver lets you deploy your application as a monolith or as microservices, offering the best of both worlds. With Service Weaver, you write your application as a modular monolith, structuring it into components. Components in Service Weaver are modelled as Go interfaces, for which you provide a concrete implementation of your business logic without coupling it to networking or serialisation code. A component is a kind of actor that represents a computational entity. These modular components, built around core business logic, can call methods of other components like local method calls, regardless of whether the components are running in a single binary or as separate microservices, without hand-writing HTTP or RPC code.


Private 5G/LTE growing more slowly than expected

The use cases for private cellular networks are numerous and varied, according to IDC, encompassing everything from wide-area applications like grid networks for utility systems and transport networks to local networks for manufacturing facilities or warehouses. Yet three factors have continued to slow the growth of private cellular, which IDC defines as 5G/LTE networks that don’t share traffic between users, as a public network would. The first is slower-than-expected availability of the latest 5G chipsets, specifically those for releases 17 and 18 from 3GPP — the cellular technology standards body — which are designed to improve ultra-reliable, low-latency communications. That creates a drag on the most advanced new implementations that private networks make possible, especially in the industrial sector, the report said. In the short term, that means that LTE will account for the bulk of spending on private cellular networks, according to the report, not to be superseded by 5G spending until 2027. Difficulties with integrating private cellular into existing network infrastructure are also slowing growth, IDC noted.


Red Hat kicked off a tempest in a teapot

We never seem to learn from history. I was part of the United Linux effort in the early 2000s while working at Novell. Scared by Red Hat’s early popularity, a group of would-be contenders to the Red Hat throne, including SUSE, Turbolinux, Conectiva, and Caldera (which became SCO Group), banded together to try to define a common, competitive distribution. It failed. Completely. As I’ve written, “It turns out the market didn’t want a common Linux distribution created by committee. They wanted the industry standard, which happened to be Red Hat.” Fast forward to 2023, and no one is clamoring for a resurrected United Linux, but CentOS had become a way for people to use RHEL without paying for it. It was, in some ways, a United Linux that actually worked, as it gave the companies behind Rocky and Alma Linux a way to compete without contributing. Now that’s gone, and there’s much hand-wringing over how hard it will be to continue delivering Red Hat’s product for free. Rocky Linux assures us it will be possible, in a poorly named post about this “Brave New World.”


Who Should Pay for Payment Scams - Banks, Telcos, Big Tech?

"The banking sector is the only sector reimbursing at the moment, and our belief is that the burden should be spread. I think tech companies should be putting their hands in their pockets, particularly as they profit from it," said David Postings, chief executive of UK Finance. In a letter last week to Prime Minister Rishi Sunak, a group of major U.K. banks said technology companies must contribute to the cost of the online fraud "pandemic" that is undermining international investor confidence in the U.K. economy, according to a report on Sky News. It makes sense for social media companies and others to be held accountable for scams. Users of Facebook, Instagram, Twitter and other platforms have fallen prey to romance scams, cryptocurrency investment scams and more. But before the government starts looking for ways to ask big tech to contribute, let's not forget about the victims. It might be difficult to prove which platform is liable and for how much. Social media conversations are often fluid and move from one platform to another. Tracing back the conversation and then establishing the responsibility across banks and tech companies could take time. 



Quote for the day:

"Leadership is a two-way street, loyalty up and loyalty down." -- Grace Murray Hopper

Daily Tech Digest - July 09, 2023

Data should be a first-class citizen in the cloud

A close cousin of the interoperability problem, data access and control are limited in many cloud environments if not designed properly and can prevent organizations from truly harnessing their business data. There doesn’t seem to be a middle ground here; either data is entirely accessible or not at all. Mostly, the control is turned off, valuable data goes unleveraged, and systems are underoptimized. You only need to look at the rise of generative AI systems to understand how this limitation affects the value of these systems. If the data is not accessible, then the knowledge engines can’t be appropriately trained. You’ll have dumb AI. This lack of control is due to opaque data ownership models and limited data processing and storage control. The solution is for organizations to create greater transparency and control over their data. This includes defining access privileges, managing encryption, and deciding how and where data is stored. This would ensure that data owners retain sovereignty and information is still available.


Where Data Governance Must Focus in AI Era

In recent years, the ethical implications of AI have come to the forefront of public discussion. Data governance reinforces the importance of adhering to ethical practices in the development and deployment of AI systems. Transparency and accountability should be the pillars upon which AI technologies are built. Generative AI and large language models have the ability to create and manipulate human-like content. This power must be wielded responsibly. Data governance requires developers and organizations to embed ethical guidelines within the AI systems themselves, ensuring that these technologies align with society’s values and do not amplify biases or spread misinformation. ... Data governance recognizes the importance of individual autonomy in an AI-driven world. It seeks to empower individuals with the ability to exercise control over their own data and determine how it is utilized. By placing decision-making power in the hands of data subjects, we uphold the fundamental principles of self-determination and personal agency.


The Need for Risk-Based Vulnerability Management to Combat Threats

In comparison to traditional, outdated approaches to vulnerability management, a risk-based strategy enables organizations to assess the level of risk posed by each vulnerability. This approach allows teams to prioritize vulnerabilities based on their assessed risk levels and remediate the highest-risk ones first, minimizing potential attacks in a way that is hassle-free, continuous, and automated. Over 90% of successful cyberattacks involve exploitation of unpatched vulnerabilities, and as a result demand for automated patch management solutions is increasing as organizations seek a smarter and more efficient vulnerability remediation strategy than those employed in the past. ... In the face of today’s threats, it is crucial to have actionable, risk-based insights that can drive security remediation efforts forward. By continuously assessing your entire attack surface, Outscan NX tools can pinpoint the most pressing threats, saving your security team valuable time and resources. Outscan NX is a comprehensive suite of internal and external network scanning and cloud security tools customized to suit the unique needs of your organization.
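Risk-based prioritization can be sketched in a few lines of Go. The scoring formula and its weights here are illustrative assumptions, not any vendor's actual model; the point is that risk combines severity with exploit likelihood and asset context, so a medium-severity flaw on a critical system can outrank a critical flaw on an isolated one:

```go
package main

import (
	"fmt"
	"sort"
)

// Vuln pairs a vulnerability with the context needed for risk scoring.
type Vuln struct {
	ID          string
	CVSS        float64 // base severity, 0-10
	ExploitProb float64 // likelihood of exploitation in the wild, 0-1
	AssetWeight float64 // business criticality of the affected asset, 0-1
}

// Risk multiplies severity by likelihood and asset criticality.
// The product form is one simple choice among many possible models.
func (v Vuln) Risk() float64 {
	return v.CVSS * v.ExploitProb * v.AssetWeight
}

// Prioritize returns vulnerabilities ordered by descending risk.
func Prioritize(vulns []Vuln) []Vuln {
	sort.SliceStable(vulns, func(i, j int) bool {
		return vulns[i].Risk() > vulns[j].Risk()
	})
	return vulns
}

func main() {
	queue := Prioritize([]Vuln{
		{ID: "CVE-A", CVSS: 9.8, ExploitProb: 0.1, AssetWeight: 0.2}, // critical severity, low exposure: risk 0.196
		{ID: "CVE-B", CVSS: 6.5, ExploitProb: 0.9, AssetWeight: 0.9}, // medium severity, actively exploited on a key asset: risk 5.265
	})
	fmt.Println(queue[0].ID) // prints "CVE-B": remediated first despite its lower CVSS score
}
```

Under a pure severity-first policy, CVE-A would have been patched first; the risk-weighted ordering instead surfaces the vulnerability most likely to cause real damage.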


13 go-to podcasts that inspire IT industry leaders today

Risky Business is a weekly cybersecurity news and current events podcast hosted by Patrick Gray and Adam Boileau. I listen to it because they do an excellent job curating the most relevant news and events in cybersecurity that occurred in the previous week. Gray is a journalist with deep cybersecurity knowledge and Boileau is an executive director at a cybersecurity firm, so the presentation is professional and includes insights on threat actors and motivations. ... I find Gartner’s CIO Mind podcast to be especially insightful and relevant to the work I’m doing. It covers a wide range of topics that CIOs are grappling with, from the recession and cost-cutting, to staffing specialized IT roles and employee retention. It keeps me tuned in to what others in the industry care about and what keeps them up at night, and it gets me thinking about ways I can improve my own organization so we can better support our clients. The podcast also shares advice from Gartner analysts and other experts that I can apply to my own organization and leverage to prepare for what’s coming, such as generative AI, workforce trends, research and development investment trends, and more.


IoT brings resource gains, sustainability to agriculture

Long-range, low-power wireless solutions equip farmers with the data they need in order to achieve their goals of increasing yield and minimizing environmental impact. Lacuna Space is expanding Long-Range WAN (LoRaWAN) coverage with satellites and LoRa technology to increase connectivity for low-coverage areas. With the ability to have reliable connectivity despite location, more farmers around the world can gather data that enables them to make informed decisions about irrigation, fertilization and more to improve crop yield and monitor water usage. Farmers in areas without cellular or Wi-Fi signals can now receive the same technological advancements as those in more connected areas. This supports smarter agricultural practices throughout the world, bringing access to tools that improve operations and crop yield to more individuals in the industry. WaterBit, a precision agriculture irrigation company, gives farmers the ability to have real-time, low-cost IoT sensing systems that improve crop quality and yield through optimized resource use.


Risk Assessment Using Blockchain

Blockchain technology promises new ways to conduct risk assessments; it helps to create a distributed, transparent, and tamper-proof system for assessing risks. Not only can this standardize and streamline the process, but it can also improve the accuracy and reliability of results. A point to note is that blockchain can only increase accuracy and make the process more efficient; it cannot replace human judgment and auditing expertise. It can enhance the auditing process by ensuring the integrity of records of transactions and events. ... Decentralized data storage eliminates the single point of failure and reduces the risk of data loss or corruption. One of the key advantages of using blockchain technology is that it allows for decentralized data storage. Information collected during risk assessments can be stored on the blockchain, making it more secure and less vulnerable to attack. Additionally, the distributed nature of blockchain technology means that multiple stakeholders can access and update the data, improving collaboration and ensuring that everyone is working from the same information.
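The tamper-evidence property at the heart of this can be sketched with a simple hash chain in Go. This is a minimal, single-node illustration of the mechanism (each record commits to the hash of its predecessor), not a distributed blockchain with consensus; the record fields are hypothetical:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Record is one risk-assessment entry. PrevHash links it to the
// previous entry, making the chain as a whole tamper-evident.
type Record struct {
	Data     string
	PrevHash string
}

// hash commits to both the record's content and its link backward.
func hash(r Record) string {
	sum := sha256.Sum256([]byte(r.PrevHash + "|" + r.Data))
	return hex.EncodeToString(sum[:])
}

// Append links a new record to the current head of the chain.
func Append(chain []Record, data string) []Record {
	prev := ""
	if len(chain) > 0 {
		prev = hash(chain[len(chain)-1])
	}
	return append(chain, Record{Data: data, PrevHash: prev})
}

// Verify recomputes every link; editing any earlier record
// invalidates every record after it.
func Verify(chain []Record) bool {
	prev := ""
	for _, r := range chain {
		if r.PrevHash != prev {
			return false
		}
		prev = hash(r)
	}
	return true
}

func main() {
	chain := Append(nil, "Q1 assessment: low risk")
	chain = Append(chain, "Q2 assessment: elevated risk")
	fmt.Println(Verify(chain)) // prints "true"

	chain[0].Data = "Q1 assessment: no risk" // an auditor's record is altered
	fmt.Println(Verify(chain))               // prints "false": the tampering is detectable
}
```

A real deployment would replicate this chain across stakeholders so that no single party can rewrite history unnoticed, which is the collaboration benefit described above.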


How can organizations maintain data governance when using generative AI?

The key to the reliability and trust of generative AI responses is combining them with cognitive enterprise search technology. As mentioned, this combination generates responses from enterprise data, and users can validate the information source. Each answer is provided in the user’s context, always accounting for data permissions from the data source with full compliance. In addition, these tools ensure data is consistently up-to-date by delta crawling. Integrating generative AI tools into a trusted knowledge management solution allows employees to see which documents their information came from and even provide further explanations.  ... Firstly, leadership must evaluate the potential impact of the generated content on the organization’s reputation, brand image, and the effectiveness it will have on the specific business unit. Legal and ethical implications and ensuring compliance with regulations and guidelines are necessary considerations, just like any other deployed technology.


Responsible tech ecosystems: pragmatic precursors to responsible regulation

Regulatory technology (regtech) is typically two-fold: compliance tech when regulated firms use it, and supervisory tech when regulators use it. As regulators monitor and enforce compliance, regtech presents new opportunities to formulate frameworks. For instance, today, AI activities of market firms are governed under disparate regulations such as data protection, consumer protection, and financial services rules. However, concerns around fairness, explainability, and accountability are yet to be addressed. Regulatory gaps expose unmitigated risks that supervisory technology can resolve. In a perfect world, even without prompts from regtech, organizations would adopt measures to address these gaps and work towards diversity and transparency, which have a direct impact on their AI models. Not every innovation, or its ensuing disruption, needs to be welcomed. We see this in the raging debate in the AI and art spaces. Any regulator has the moral obligation to react to emerging technology, even if post-facto.


Crossing the Data Divide: Closing Gaps in Perception of Data as Corporate Asset

What I am suggesting is that our data leaders need to elevate their vision and messaging to describe a new type of system that is the authoritative reference for all enterprise data assets. This new type of system needs to take its place next to the ERP, CRM, and HRM systems within the enterprise. This means it must provide value for everyone, both technical and non-technical, and also provide context for data assets that includes its trustworthiness, source, owner, experts, reviews, and much more, all wrapped in a consumer-grade user interface experience. What is this system? I call it a social data fabric (SDF). That term has been used lightly in the social media world, but I am commandeering it for our purposes. I define an SDF system as a combination of an enterprise data catalog and an internal marketplace where employees can explore and ‘shop’ for data. The catalog portion of the system should ingest and manage a broad range of data, business intelligence, and data-related assets such as term glossaries, KPIs, analytic models, and business processes. 


Executive Q&A: Controlling Cloud Egress Costs

For smaller enterprises, egress charges are fairly minimal as most data resides in a single cloud region and is accessed within that region. For larger enterprises, the number of scenarios which incur egress fees is higher. One such scenario is implementing a hybrid cloud for cost management or a multicloud to make use of the latest optimized computing hardware that might not be available in the primary cloud. For these scenarios, egress fees might be as high as a third of the cloud service expense with naive implementations. Better-optimized implementations can bring down the egress cost but still fall short as more management complexity is introduced and operations staff need to be hired to compensate. The reason such fees come as a surprise is that it's hard to predict how much data is going to be accessed across regions, and usually this number only increases with time. ... Moving raw data across network boundaries is infeasible. Building a federation layer to query across all curated data is key. 
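A rough back-of-the-envelope model shows why egress fees are hard to predict and how quickly they compound. The per-GB rates below are hypothetical placeholders, not any provider's actual pricing; real pricing is tiered and changes over time:

```python
def egress_cost(gb_moved, per_gb_rate):
    """Flat-rate egress estimate; real provider pricing is tiered."""
    return gb_moved * per_gb_rate

# Illustrative figures only -- check your provider's current price sheet.
CROSS_REGION_RATE = 0.02   # $/GB, hypothetical cross-region rate
INTERNET_RATE = 0.09       # $/GB, hypothetical internet-egress rate

monthly_replication_gb = 50_000   # hybrid/multicloud data replication
monthly_api_egress_gb = 4_000     # data served to external consumers

total = (egress_cost(monthly_replication_gb, CROSS_REGION_RATE)
         + egress_cost(monthly_api_egress_gb, INTERNET_RATE))
print(f"Estimated monthly egress: ${total:,.2f}")
```

Even with these modest assumed rates, the replication traffic of a multicloud design dominates the bill, which is the argument for federated queries over moving raw data across boundaries.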



Quote for the day:

"Power should be reserved for weightlifting and boats, and leadership really involves responsibility." -- Herb Kelleher

Daily Tech Digest - July 08, 2023

10 ways SecOps can strengthen cybersecurity with ChatGPT

ChatGPT is proving effective at predicting potential threat and intrusion scenarios based on real-time analysis of monitoring data across enterprise networks, combined with the knowledge base the LLMs supporting them are constantly creating. One CISO running a ChatGPT pilot says the goal is to test whether the system can differentiate between false positives and actual threats. The most valuable aspect of the pilot so far is the LLMs’ potential in analyzing the massive amount of threat intelligence data the organization is capturing and then providing contextualized, real-time and relevant insights to SOC analysts. ... Knowing that manual misconfigurations of cybersecurity and threat detection systems are one of the leading causes of breaches, CISOs are interested in how ChatGPT can help identify and recommend configuration improvements by interpreting the data indicators of compromise (IoCs) provided. The goal is to find out how best to fine-tune configurations to minimize the false positives sometimes caused by IoC-based alerts triggered by a less-than-optimal configuration.
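The pre-filtering described in the pilot can be sketched as a toy scoring function. The signals and thresholds below are purely illustrative, and a real deployment would pass the enriched alert to an LLM for contextualized analysis rather than stop at a rule score:

```python
def triage_alert(alert):
    """Toy pre-filter: score an alert before (hypothetically) asking an
    LLM for contextualized analysis. Thresholds are illustrative only."""
    score = 0
    if alert.get("known_bad_ip"):
        score += 3   # threat-intel match is the strongest signal here
    if alert.get("failed_logins", 0) > 10:
        score += 2   # brute-force pattern
    if alert.get("off_hours"):
        score += 1   # anomalous timing
    verdict = "escalate" if score >= 3 else "likely_false_positive"
    return verdict, score

alert = {"known_bad_ip": False, "failed_logins": 25, "off_hours": True}
verdict, score = triage_alert(alert)
print(verdict, score)
```

The value an LLM adds on top of rules like these is context: correlating the scored alert with threat-intelligence narratives and past incidents, which fixed thresholds cannot do.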


The Interplay of IGA, IAM and GRC for Comprehensive Protection in Cloud Transitions

Managing user access in separate applications that each have their own security rules can be tricky. Consider an example of an employee who has had different roles in the same organization over time. With each new role, this person might have gained more security permissions in systems such as JD Edwards or SAP. The more permissions they have, the higher the chance of fraud or breaking a segregation of duties (SoD) rule, which says that no one person should have control over two conflicting business tasks. To make this example even clearer, imagine that this employee also has access to a different system, such as PeopleSoft, because of work on a project. Now they have access across multiple systems, and keeping track of what they can do becomes more challenging. ... There are tools that can help lower this risk by displaying details about user access and what the users are doing with their access, but often, these tools only show part of the picture, especially when it comes to complex security models and multiple applications, or are siloed into addressing only a single application.
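A cross-system SoD check reduces to aggregating one user's entitlements from every application and testing the combined set against conflict rules. The rule pairs and entitlement names below are illustrative assumptions, not a real control catalog:

```python
# Hypothetical SoD rule set: pairs of entitlements that conflict,
# regardless of which system (SAP, JD Edwards, PeopleSoft...) grants them.
SOD_RULES = [
    ("create_vendor", "approve_payment"),
    ("post_journal", "approve_journal"),
]

def sod_violations(user_entitlements):
    """Return the conflicting pairs a user holds across all systems."""
    held = {ent for _system, ent in user_entitlements}
    return [rule for rule in SOD_RULES if set(rule) <= held]

# One user, entitlements aggregated from several applications:
entitlements = [
    ("SAP", "create_vendor"),
    ("JD Edwards", "approve_payment"),
    ("PeopleSoft", "run_reports"),
]
print(sod_violations(entitlements))
```

The detected conflict spans two different systems, which is exactly the case a tool siloed into a single application would miss.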


Applying the MACH Architecture: Lessons Learned

By designing APIs first, they were able to ensure a smoother, more cohesive development process. This approach has enabled them to take advantage of the robust capabilities of their API gateway, streamlining their processes and fostering efficient communication between various teams. The shift to a cloud-native approach, leveraging SAP-managed cloud, private and public clouds, has enhanced their scalability and flexibility while reducing operational overhead. The combination of these approaches has resulted in a highly efficient, reliable, and scalable e-commerce platform. Embracing headless architecture has led to a transformation in their front-end development. By decoupling the front end from the backend, they have made it easier to make changes and updates to their Angular-based frontend applications, leading to a better user experience. ... Furthermore, the ability of MACH architecture to handle peak loads effectively is particularly relevant in the e-commerce industry. 


How to cultivate a culture of continuous cybersecurity improvement

The interplay between real-time and periodic security practices is central to effective vulnerability management. Since each has its own unique value proposition, a robust cyber defense strategy must blend both types of practices into a unified approach. Real-time security practices are indispensable in a world where threats emerge and evolve in the blink of an eye. For instance, endpoint detection and vulnerability detection must be ongoing processes. They offer a pulse on the network, alerting organizations to threats as they surface. A lapse in real-time activities can spell disaster: recent ransomware attacks have demonstrated that vulnerabilities can be exploited in mere hours, and sometimes less. An effective real-time security system provides the crucial window needed to detect and rectify vulnerabilities before they’re exploited. On the other hand, periodic security practices, such as penetration testing, provide an opportunity to stress-test the system and uncover potential weaknesses. Still, their value should not be overstated. 


Data is not a Microservice

The purpose of a microservice is to power an aspect of some customer experience. Its primary function is operational. The purpose of data is decision-making. Its primary function is TRUTH. How that truth is used can be operational (like an ML model) or analytical (answering some interesting question). Businesses already collect large volumes of data at tremendous speed and dump raw logs into lakes for data engineers to sort through later. Data developers struggle because the data they have taken dependencies on has no ownership, the underlying meaning is not clear, and when something changes from a source system very few people know why and what they should expect the new 'truth' to be as a result. In data, our largest problems are rooted in a lack of trust. In my opinion, a source of truth is an explicitly owned, well-managed, semantically valid data asset that provides an accurate representation of real-world entities or events, reflected in code. In the traditional on-premise Data Warehouse, an experienced data architect was responsible for defining the source of truth in a monolithic environment.
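The definition of a source of truth above implies metadata that can be checked mechanically: explicit ownership and documented semantics. A minimal sketch, with hypothetical field names:

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    """Minimal metadata a 'source of truth' should carry: explicit
    ownership and documented meaning. Fields are illustrative."""
    name: str
    owner: str = ""
    description: str = ""
    schema_fields: dict = field(default_factory=dict)

def is_source_of_truth(asset):
    # An asset qualifies only if it is explicitly owned and its
    # semantics are documented -- the trust criteria described above.
    return bool(asset.owner and asset.description and asset.schema_fields)

raw_log = DataAsset("clickstream_raw")  # dumped into the lake, unowned
orders = DataAsset("orders", owner="commerce-team",
                   description="One row per confirmed customer order.",
                   schema_fields={"order_id": "str", "total": "decimal"})
print(is_source_of_truth(raw_log), is_source_of_truth(orders))
```

The raw log fails the check not because the data is wrong but because nobody owns it or has defined what it means, which is precisely the trust gap the author describes.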


Revolutionizing the Nine Pillars of SRE With AI-Engineered Tools

Applying AI to SRE is a complex process with certain challenges. Here are some potential pitfalls along with ways to address them. Lack of quality data: AI and machine learning models are only as good as the data they are trained on, and inadequate or poor-quality data can lead to inaccurate predictions and insights. Prioritize data hygiene and governance: collect comprehensive and diverse data from your systems, ensure that it is well-structured and free of errors, and store it in a way that’s easily accessible for training AI models. Over-reliance on automation: while AI can greatly enhance automation, relying on it too heavily without human oversight can lead to missed signals or overcorrections in response to false positives. Maintain a balance between automation and human oversight, and use AI to support decision-making, not replace it entirely. It’s important to have experienced SREs review AI outputs regularly to ensure they make sense and are beneficial. Underestimating the need for AI expertise: implementing AI is not just about buying and deploying a tool. 


LockBit Hits TSMC for $70 Million Ransom: What CIOs Can Learn

TSMC has not given any public indication of how it plans to respond to LockBit’s demand. Bill Bernard, area vice president of cybersecurity company Deepwatch, believes it is unlikely the chipmaker will give in and pay the ransomware gang. “They’re claiming very publicly that the data gathered was not damaging to their ability to do business or to their customers. If true, there’s very little motivation for them to pay this extortion,” he tells InformationWeek. Refusal to pay would be a part of a larger trend observed over the past year or so, according to Bernard. He notes there have been “…more attempted ransomware events, but fewer payouts as businesses see the cost of recovery being significantly less than the cost of the ransom.” Even if refusal to pay is the less expensive option, companies still face consequences in the wake of an attack like this. “If TSMC opts not to pay, it could face short-term operational disruption, potential data loss, and the leak of sensitive information, damaging its reputation and breaching customer trust,” explains Ani Chaudhuri, CEO of data security company Dasera.


Why Are Team Topologies Essential for Software Architecture and Software Development Efficiency?

"Team Topologies" suggests leveraging Conway's Law as a strategic advantage in software architecture. The book proposes that architects can encourage or discourage certain types of designs by shaping the organization and team structures. As Ruth Malan points out, "If we have managers deciding which services will be built, by which teams, we implicitly have managers deciding on the system architecture." This reinforces the critical role of architects and engineering professionals in actively structuring team topologies and their communications and responsibilities. Unfortunately, in many companies, team topologies are determined without adequately considering the expertise of architects and engineering professionals. This lack of involvement can lead to architectural misalignments and inefficiencies. To ensure successful architectural outcomes, it is crucial for organizations to actively involve architects and engineering professionals in decisions related to team topologies. Their knowledge and insights can help shape team structures that align with architectural goals and foster effective communication and collaboration.


4 tips to improve employee experiences while maintaining security and governance

IT security leaders recognize that cyberthreats and attack vectors continually evolve. However, staying ahead of cybercriminals is not Job 1 for employees who simply want to get their work done. Within that context, it’s important to maintain regular, ongoing education and training, said the experts: “Continuously educate and engage. Regularly communicate with employees about the importance of security and governance controls. Offer training sessions, workshops, and awareness programs to educate employees on best practices.” ... In this regard, the enterprise browser can serve as a point of dialog between IT and business users to better understand each other’s needs. “No one wants to be blocked from accessing a particular app or website,” said Lorena Crowley, Head of Chrome Enterprise Marketing at Google. “The browser becomes an educational opportunity for users to learn why an extension is blocked, and for admins to learn about why an extension or website is important for users to get their work done.”


Slimming Down .NET: The Unofficial Experiments of Michal Strehovský

This episode features an interview with Michal Strehovský, a developer on the .NET runtime team who has been experimenting with reducing the size of .NET applications. Strehovský’s experiments have led him to create BFlat and Flattened.NET, personal projects that allow .NET developers to play with the technology and non-.NET developers to get into .NET. One of his experiments involved creating a self-contained WinForms Snake game in C# that was under 8KB in size. By venturing into unsupported territory, such as ahead-of-time compilation and trimming, and even writing his own core library to work around missing pieces of the runtime, Strehovský was able to achieve this impressive feat. The standard .NET publishing process includes the entire runtime and base class libraries, resulting in a large executable, but trimming can be used to remove unnecessary components. However, the runtime itself cannot be trimmed. Native AoT can be used to compile the entire app ahead of time, resulting in a smaller runtime and smaller app size.



Quote for the day:

"Learning is a lifetime process, but there comes a time when we must stop adding and start updating." -- Robert Brault

Daily Tech Digest - July 06, 2023

Man vs machine: a secure email firm aims to bring post-quantum cryptography to the cloud

"While quantum computers will soon be able to decrypt 'normally' encrypted data quite easily, they will cut their teeth on post-quantum secure encryption," said Pfau. Tutanota's plan is to use a hybrid encryption approach—at first, at least. All data will be encrypted using both classical and the new post-quantum algorithms. This double protection will make sure that the new algorithms have time to prove themselves as actually safe. PQDrive is the last step in Tutanota's post-quantum challenge. The company started its mission three years ago with PQMail to make both its email and calendar apps post-quantum resistant. The team has already begun to add the new algorithms into the software, which should be fully updated for all its 10 million users by 2024. Pfau is very happy that the algorithms the team chose to work with years ago were selected among the best choices for secure post-quantum encryption by the National Institute of Standards and Technology (NIST).


To close the skills gap, stop focusing on skills candidates don’t have

Modern hiring strategies should be designed to bring employment opportunities to under-represented talent communities, including people with disabilities, women of colour and members of the military and their spouses. Bridging the gap between under-represented communities and equitable job opportunities can help fuel growth and close the talent gap. Likewise, it’s important to work with like-minded organizations that operate with under-represented communities to empower the next generation of skilled workers by giving back and creating new career pathways for them. For example, since 2015, Equinix has partnered with World Pulse, a global, online community that connects and amplifies women’s voices, as well as provides digital empowerment training. The partnership can create a new career pathway for women around the world with digital skills and resources and help close the digital divide’s gender disparity in an organic, grassroots way to maximize the impact within these communities.


New Chinese Counterespionage Law Aimed at US Tech Sector

The revised law grants "state security organs," the armed forces, the CCP and public institutions the power to proactively respond to all forms of network attacks, attacks on critical information infrastructure, and those that aim to obstruct, control or disrupt government functions. It also gives the government power to take legal action against foreign institutions suspected of carrying out espionage activities. "Acts of espionage endangering the PRC's national security that are carried out, instigated or funded by foreign institutions, organizations or individuals, or that are carried out by domestic institutions, organizations or individuals colluding with foreign institutions, organizations or individuals must be legally pursued," it reads. The revised law also gives agencies the power to "inspect the electronic equipment, facilities and related programs and tools of relevant individuals and organizations," and seal or seize property if the entity under investigation fails to employ immediate corrective measures.


5 ways to boost server efficiency

According to the Uptime Institute report, power management can increase latency by 20 to 80 microseconds, which is unacceptable for some types of workloads, such as financial trading. "And there are some applications where you might decide not to use it because it will cause performance or response time problems," he says. But there are other applications where delays won’t have a business impact. "The biggest mistake is that some operators are risk averse," he says. "They think that if they're going to save a couple of hundred bucks a server on their energy bill but are risking breaking their SLA which will cost them a million dollars, they're not going to turn [power management] on." Dietrich recommends that when companies buy new servers and run their performance tests, they make sure to test whether power management affects the applications adversely. "If it doesn't bother them, then you can use power management," he says. "You can implement a set of power-management functions that will let you save energy and still provide response time and performance that your customers want."
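The cost-benefit reasoning Dietrich describes is simple expected-value arithmetic. The figures below are made up for illustration; only the structure of the trade-off matters:

```python
def power_mgmt_expected_value(servers, savings_per_server,
                              breach_probability, breach_penalty):
    """Expected net benefit of enabling power management.
    All inputs are illustrative assumptions, not vendor figures."""
    savings = servers * savings_per_server
    expected_penalty = breach_probability * breach_penalty
    return savings - expected_penalty

# A latency-tolerant fleet: no SLA exposure, savings are pure gain.
batch_fleet = power_mgmt_expected_value(500, 200, 0.0, 0)

# A latency-sensitive fleet: even modest breach odds erase the savings.
trading_fleet = power_mgmt_expected_value(500, 200, 0.2, 1_000_000)

print(batch_fleet, trading_fleet)
```

This is why testing power management per workload matters: the same per-server savings flip from a clear win to a net loss once an SLA with a large penalty is in play.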


CIOs, Heed On-Premises App and Infrastructure Performance

As more applications run across on-premises and cloud environments, IT teams responsible for managing availability and performance face significant challenges. Today, most IT departments use separate tools to monitor on-premises and cloud applications, which brings a lack of visibility across the entire application path in hybrid environments. IT leaders can’t visualize the path up and down the application stack and they can’t derive business context, making it virtually impossible to troubleshoot issues quickly. This leaves them in a firefighting mode to solve issues before they affect end users. An IT department’s worst nightmare, like an outage or even damaging downtime, surges when metrics such as MTTR and MTTX inevitably rise. To avoid these issues, IT teams require an observability platform for unified visibility across their entire IT estate. Through this platform, IT leaders can access real-time insights of IT availability and performance across both on-premises and public cloud environments and are able to correlate IT data with real-time business metrics, allowing them to prioritize issues that matter most to customers and the business.


IBM shutters Cloud for Education service just two years after launch

IBM didn’t really give any official reason for the closure, saying simply that it regularly evaluates its cloud service offerings while keeping things like customer requirements and consumption in perspective. The service will continue to operate as normal until Nov. 30, and customers are being invited to talk with IBM’s representatives about the steps they can take to migrate their data and workloads to an alternative platform. Holger Mueller of Constellation Research Inc. told SiliconANGLE that Cloud for Education clearly wasn’t as successful as the company had hoped it would be, because it wouldn’t retire the offering otherwise. “But it’s good to see IBM is retiring the service in a respectful way, giving its customers several months to work out how they’re going to migrate their workloads to an alternative platform,” Mueller said. “Generally, most cloud vendors will only give their customers 30 days’ notice when they decide to sunset a service.” The shutdown may have a somewhat negative impact on IBM’s cloud reputation though, given how it has struggled to achieve the same kind of success as its rivals, Amazon Web Services Inc., Microsoft Corp., Google LLC and even Oracle Corp.


Making intelligent automation work at scale

“We continue to make significant progress in operating with a digital-first mindset and reimagining our end-to-end processes with IA,” says Ajay Anand, vice president of strategy and business services for Global Services at J&J. “We are using insights from our IA maturity assessment efforts to identify large untapped value pools to drive visibility with our executive committee and functional leaders,” Anand says. “In addition, we are also focused on developing a framework for generative AI use case development and prioritization.” The enterprise IA program is delivering on “experience, effectiveness, and efficiency — giving our employees more time to focus on creative innovations and upskilling,” says Steve Sorensen, vice president of technology services, supply chain, data integration, and reliability engineering at J&J. “It is enabling the reimagining, simplifying, and digitizing processes for employees, patients, healthcare professionals, and other stakeholders, while delivering significant value for the organization.”


How Much Architecture Modeling Should You Do? Just Enough – Part 1

The fundamental challenge with JBGE is that it is situational. For example, I often draw a diagram on a whiteboard to explore complex logic and then discard it once I’m done with it. In this case a whiteboard diagram is fine because it helps me to solve the issue which I’m thinking through with whomever I’m working. But, what if we’re in a situation where we’ll need to update this logic later AND will want to do it via the diagram instead of via source code? Clearly a hand-drawn sketch isn’t good enough in this case and we’ll want to create a detailed diagram using a sophisticated software-based tool. We’d still create an agile model even though it is much more sophisticated than a sketch because JBGE reflects the needs of the situation. To determine if an architecture model is JBGE you must actively work with the direct audience of that artifact. In the case of a business architecture model this would be both your business stakeholders and the implementation team(s) that are going to work with the model. Without knowing what the audience wants, you cannot realistically create something which is JBGE, putting you in a situation where you’re motivated to put far more effort into the artifact than you need to.


Unmasking Deepfakes: Defending Against a Growing Threat

“The same truth about authentication of audio or visual content is true about authentication in the technical systems of identity.” Amper says while the technology is maturing rapidly toward lifelike, intelligent impersonations, the human eye can still spot blurring around the ears or hairline, unnatural blinking patterns, or differences in image resolution. “Color amplification tools that visualize blood flow or ML algorithms trained on spectral analysis are equally effective at detecting and vetting extreme behavior,” he says. He says although contemporary deepfakes are extremely well-done and increasingly hard to recognize, digital identity verification and liveness detection can authenticate a person’s unique identity markers. Once a user has been confirmed as the genuine owner of the real-world identity they are claiming, deep convolutional neural networks can be trained and leveraged for biometric liveness checks including textural analysis, geometry calculation, or traditional challenge-response mechanisms to verify if the person presented on screen is real.
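A traditional challenge-response liveness check, stripped to its core, is an unpredictable prompt plus a verified reaction. The challenge list is illustrative, and the vision-model classification that would interpret the user's on-screen behavior is stubbed out here:

```python
import random

# Hypothetical liveness challenges a verification flow might issue.
CHALLENGES = ["blink_twice", "turn_head_left", "smile", "say_phrase"]

def issue_challenge(rng):
    """Pick an unpredictable liveness challenge (toy protocol sketch)."""
    return rng.choice(CHALLENGES)

def verify_response(challenge, observed_action):
    # In a real system 'observed_action' would come from a vision model
    # classifying the user's behavior on camera; here it is simulated.
    return observed_action == challenge

rng = random.Random(42)  # seeded only so this sketch is reproducible
challenge = issue_challenge(rng)
print(verify_response(challenge, challenge),
      verify_response(challenge, "no_action"))
```

The unpredictability is the defense: a pre-rendered deepfake cannot know which action will be requested, so it must synthesize the correct response in real time, which raises the attacker's cost considerably.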


Promoting responsible AI: Balancing innovation and regulation

From a cybersecurity perspective, we must address privacy and security concerns. Bad actors are successfully using confidentiality attacks to draw out sensitive information from AI systems. Without proper security measures, institutions and individuals are at risk. To protect students, for example, institutions may put in place policies curbing the use of AI tools in specific instances or provide educational content cautioning them against sharing confidential information with AI platforms. Algorithmic biases, inaccuracies, and overgeneralizations represent intrinsic limitations of the technology since the models are a reflection of the data they are trained on. Even if care is taken to ensure input data is fact-checked and accurate, hallucinations may still occur. Therefore, a human element is still important in the use of AI. Fact checks and discerning eyes can help weed out inaccuracies. Councils guided by community-oriented ethical guidelines can help reduce biases.



Quote for the day:

"Speak softly and carry a big stick; you will go far." -- Theodore Roosevelt