Daily Tech Digest - July 13, 2023

Industry groups call for changes to EU Cyber Resilience Act

The first recommendation made by the collective is that the proposed scope of the CRA should be made narrower and clearer. "Any reference to 'remote data processing solutions' should be excluded from the scope of the CRA to ensure legal clarity, and to avoid overlaps with existing legislation and unnecessary burden," they wrote. Software as a service, platform as a service, or infrastructure as a service should not be considered within the scope of the CRA, and this clarification should be reflected in the core legal text to provide greater legal certainty and to facilitate implementation across the EU, the recommendation read. ... The second recommendation calls for a more proportionate approach to determining a product's risk-level, along with greater certainty for manufacturers to ascertain if a product is deemed a critical one. "A transparent and inclusive review process involving economic operators should be set up to determine whether a product is critical," the groups wrote. This would avoid wrongfully designating too many products as "critical," making them more expensive...


AI’s Impact on Security, Risk and Governance in a Hybrid Cloud World

To build an AI-driven compliance, security and governance solution, you must first be able to scale and learn from large data sets. To learn from the data, you must build training models for the data to be processed effectively by the AI component. These training models require the ability to analyze and operate at scale and support different training models for different use cases. Since we need to analyze and operate at scale continuously, we have moved from the underlying tech of machine learning (ML) to deep learning (DL) based on neural net technology. With this technology, we can detect, analyze and prioritize the findings. The second part of this is auto-remediation; this enables us to understand where the problem is developing and what actions, if taken, would create the biggest impact. This prioritization technique driven by AI and our proprietary technology working together creates a scenario of a self-healing environment. In this environment, a problem is addressed before it becomes a serious issue. 
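As a rough illustration of the prioritization step described above, the "biggest impact first" idea can be sketched as ranking findings by severity weighted by blast radius. All field names, scores, and the remediation action below are hypothetical, not the vendor's actual scoring:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    resource: str
    severity: int        # 1 (low) .. 10 (critical)
    blast_radius: int    # number of downstream systems affected

def priority(f: Finding) -> int:
    # Rank findings by potential impact, not just raw severity.
    return f.severity * f.blast_radius

def remediate(findings: list[Finding]) -> list[str]:
    # Address the highest-impact problems first ("self-healing" loop).
    return [f"patch {f.resource}" for f in sorted(findings, key=priority, reverse=True)]

findings = [
    Finding("db-prod", severity=8, blast_radius=5),
    Finding("web-cache", severity=9, blast_radius=1),
]
print(remediate(findings))  # db-prod (score 40) outranks web-cache (score 9)
```

The point of the weighting is that a moderately severe finding on a widely connected resource can matter more than a critical finding on an isolated one.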


9 tips for recruiting high-end IT talent

“Create a brand and reputation to attract this kind of talent to the work you do and your company’s culture,” says Drees. “That could be LinkedIn content or articles you post on your company site.” It could be stories in the news about your company or what personnel and clients say about the company in social media. ... “Give people the ability to grow, mature, and evolve,” says Majeed, whose leadership team has spent a great deal of time, thought, and money on this idea, focusing on creating a culture that nurtures and incubates talent, going so far as to build customized learning programs that encourage people to learn new technical skills and to grow their career. “We also give people so much flexibility to do what they want to do,” he says. This might sound like a distraction from work — time consuming, perhaps, or expensive. But it’s effective, he says. “It makes people more productive — they are working with passion and purpose.” ... “Leverage the engineers on your team, who are excited about the challenges they’re solving,” says Drees.


Combatting data governance risks of public generative AI tools

Integration enables users to obtain answers or sentences derived from enterprise data relevant to their queries. While publicly available generative AI tools permit natural language querying, public web data is not always applicable to the use case. Knowledge management solutions connect data from various data sources and business applications to consolidate the data into a central knowledge base. When it comes to querying about a customer or details of a business document, this is the only way to retrieve answers based on specific company entities. Additionally, delta crawling (i.e., crawling for new data only) ensures that the model’s data is always up to date, so users aren’t receiving old and obsolete information. ... ChatGPT and other publicly available models, like Google Bard, do not cite where their outputs came from. So, how do you know if the content came from a reliable source versus an opinionated blog or insignificant public forum? Adding the source allows users to open the corresponding document or file and view all the details to confirm accuracy and gain further insight into their query.
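The delta-crawling idea mentioned above can be sketched in a few lines: keep a fingerprint of each document from the last crawl, and re-ingest only what is new or changed. This is a minimal sketch using content hashes; a real crawler would also use timestamps, ETags, or source-specific change feeds, and the document IDs here are made up:

```python
import hashlib

def delta_crawl(documents: dict[str, str], seen_hashes: dict[str, str]) -> list[str]:
    """Return only the document IDs that are new or changed since the last crawl."""
    changed = []
    for doc_id, content in documents.items():
        digest = hashlib.sha256(content.encode()).hexdigest()
        if seen_hashes.get(doc_id) != digest:
            changed.append(doc_id)
            seen_hashes[doc_id] = digest  # remember state for the next crawl
    return changed

state: dict[str, str] = {}
print(delta_crawl({"contract-1": "v1", "faq": "v1"}, state))  # first crawl: everything
print(delta_crawl({"contract-1": "v2", "faq": "v1"}, state))  # later crawl: only the changed doc
```

Because only the changed subset is re-processed, the knowledge base stays current without re-indexing the entire corpus on every pass.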


Civil society groups call on EU to put human rights at centre of AI Act

The groups are therefore calling on the EU institutions to draw clear limits on the use of AI by national security, law enforcement and migration authorities, particularly when it comes to “harmful and discriminatory” surveillance practices. They say these limits must include a full ban on real-time and retrospective "remote biometric identification" technologies in publicly accessible spaces, by all actors and without exception; a prohibition on all forms of predictive policing; a removal of all loopholes and exemptions for law enforcement and migration control; and a full ban on emotion recognition systems. They added the EU should also reject the Council’s attempt to include a blanket exemption for systems developed or deployed for national security purposes; and prohibit the use of AI in migration contexts to make individualised risk assessments, or to otherwise “interdict, curtail and prevent” migration. The groups are also calling for the EU to properly empower members of the public to understand and challenge the use of AI systems.


The Challenges and Rewards of Zero Trust Privacy

A primary challenge that occurs with the implementation of zero trust privacy is the lack of a compliance footprint. A compliance footprint is a list of all the laws, regulations and standards the organization must adhere to. Often, companies do not have a team or individual responsible for monitoring changes in the compliance landscape. Failure to do this impacts privacy compliance and the ability to implement zero trust privacy. Organizations cannot guarantee that the system architecture restricts the flow of data beyond that which is legal because they do not know their obligations. We see this today with the increase in privacy fines that have been issued for inappropriate collection and transmission of personal data. Another challenge is that organizations often start with identity and access management. When users’ access and authorization permissions are enabled for an unknown set of data elements, organizations cannot guarantee compliance with least privilege requirements.


Microsoft jumps into competitive security service edge (SSE) arena

Analysts say Microsoft, while late to the market, will be a welcome player in the SSE arena given its large customer base. “Cisco, Palo Alto Networks, Symantec, and Zscaler have a multi-year start over Microsoft. Gaining momentum in a crowded market will take work,” wrote Dell’Oro Group research director Mauricio Sanchez in a blog about the SSE announcement. “Everyone knows who Microsoft is and generally enjoys substantial goodwill among its customer base. A large salesforce and partner ecosystem will open many doors,” Sanchez stated. “Large enterprises that are strong Microsoft shops and take advantage of Microsoft’s Enterprise Licensing Agreement benefits could lead to significant uptake of Microsoft SSE solution.” Also, no other SSE vendor has the same identity vendor chops that Microsoft brings. SSE is identity-heavy, which Microsoft can exploit by owning the identity use cases end-to-end, Sanchez stated. Microsoft Windows and Office 365 clients can preview the SSE software, and it will be generally available for other operating systems later this year.


The obsession advantage in transformation

During tough times, it’s easy to look at customers as a means to an end—a way to drive revenue and help your bottom line. But that’s a terrible approach; your customer is also going through the same difficult times, and this is your chance to support them. Obsess about their pain points and learn how you can be there for them. Work from my PwC colleagues has shown that when companies wire a deep understanding of customers into their business models, operations, and decision-making, they not only increase value for customers, but gain insights that help to further differentiate the business. ... The most transformation-ready leaders look to other innovative approaches to gain new perspectives. Whether this is through conversations with executives in different industries, speaking with sports coaches or sociologists, reading and researching relevant case studies, or speaking one-to-one with more junior employees at your own company, gaining a new perspective can often lead to powerful inspiration. Don’t wait for these views to come to you, either.


Building a Data Driven Organization

"The key lies in democratizing data assets and their utilization by providing user-friendly tools, offering literacy courses, and promoting approaches that enable employees across the organization to generate insights," he says. He adds it is not enough for top management to merely include data-driven initiatives in their business strategy -- they must visibly and consistently support the cultural transformation. "This involves actively measuring progress, recognizing early adopters as champions, and rewarding them accordingly," he says. "Holding leaders accountable for driving cultural change in their respective areas is essential." ... The data governance element is also critical, which means establishing goals, measurements, and continuous improvement practices to maximize the value derived from data and ensure user satisfaction. "Set clear objectives for data utilization, monitoring performance against these goals, and consistently refining processes to optimize data-driven practices," he says. By implementing these practices, organizations can foster a data-driven culture where employees are equipped with the necessary tools, skills, and mindset to leverage data effectively in their decision-making processes.


Leap to leader: Make yourself heard

It’s not just a matter of going into a meeting and asking for a raise or promotion. Instead, imagine how an agent or headhunter would represent you. How would they make the case for you getting the job or the raise you deserve? And remember, it’s not just your boss you have to convince; your goal is to give them specifics so that they can go make a case for you to their boss and to HR. Ground the conversation in facts. What have you accomplished? How has your work helped drive the business? Can you point to concrete ways in which you’ve added value? ... There’s a mental loop people can get caught in that might keep them from pushing for more money, whether negotiating for a raise or for a pay package that comes with the new job. “I don’t want to rock the boat,” they say to themselves. “I want to make sure things start on a positive note. I’m grateful for the opportunity.” As a result, they settle too quickly. But for more senior roles, the person on the other side of the table is expecting you to push, and they’ve probably built in some negotiating room for when you do start pushing.



Quote for the day:

"It is not fair to ask of others what you are not willing to do yourself." -- Eleanor Roosevelt

Daily Tech Digest - July 12, 2023

4 collaboration security mistakes companies are still making

If organizations don’t provide access to vetted collaboration tools, employees will likely find their own and use insecure solutions, said Sourya Biswas, technical director, risk management and governance at security consulting firm NCC Group. “Therefore, while it’s important for organizations to embrace digital collaboration, at the same time they should prevent installation and use of unapproved tools, via mechanisms such as restricted local admin access and managed browser solutions.” Even when collaboration tools are vetted and approved, organizations must be cognizant of the different collaboration platforms that each employee is allowed to access in order to prevent sensitive data from being exfiltrated and avoid providing new attack vectors for bad actors, said Michael McCracken, senior director of end user solutions at SHI International, a reseller of technology products and services. In addition, IT needs to maintain central control over these tools, said AJ Yawn, partner, risk assurance advisory at Armanino, an independent accounting and business consulting firm.


EC Says European Private Data Can Flow to Compliant US Companies

The business community had been waiting for guidance on how data privacy policy might look in the EU, says Dona Fraser, senior vice president of privacy initiatives with BBB National Programs, a nonprofit that oversees national, industry self-regulation programs. With the former EU-US Privacy Shield rendered invalid in 2020 by the European Court of Justice, new policy was needed. Fraser says companies wanted to comply and be able to safely conduct business without worry of intervention or whether or not their consumers were being treated properly, but policy was in limbo. The announcement about the new framework seems to have restored confidence in the program. “This week,” she says, “we’ve received an enormous amount of inquiries from current and past participants saying, ‘What's next, what do we do?’ The eagerness that we’re hearing in the marketplace is, for us, from a business perspective, it’s great to hear.” Logistics of the framework and the approval process for businesses still need to be worked out, Fraser says, but now the door is open for companies that halted work with data from Europe to reemerge.


CISO perspective on why boards don’t fully grasp cyber attack risks

A CISO needs to understand the knowledge and background of the board members to be able to translate technical jargon into business language and something familiar to the target audience. I approach this by relating technical jargon to everyday situations or business scenarios, something the board can easily grasp. To be effective at this style of communication, I collaborate with other business leaders outside of the technology groups to optimize business alignment. Focusing on the potential business impact of cybersecurity risk also allows a CISO to frame technical issues in terms of their consequences such as financial loss or damage to the company’s brand. It is equally important to be concise and avoid over-embellishing cyber-risks, while still focusing on the strategic objectives you are asking the board to weigh in on. To bridge the gap between board members and CISOs to promote the mitigation of cyber-risk, it is essential that a CISO enhance communication, educate board members about cybersecurity risks and promote a collaborative approach to decision making.


Data Management at Scale

If your company already has a high level of data management maturity or is decentrally organized, then you can begin with a more decentralized approach to data management. However, to align your decentralized teams, you will need to set standards and principles and make technology choices for shared capabilities. These activities need to happen at a central level and require superb leaders and good architects. I’ll come back to these points toward the end of this chapter, when discussing the role of enterprise architects. Besides the starting point, there are other aspects to take into consideration with regard to centralization and decentralization. First, you should determine your goals for the end of your journey. If your intended end state is a decentralized architecture, but you’ve decided to start centrally, the engineers building the architecture should be aware of this from the beginning. With the longer-term vision in mind, engineers can make capabilities more loosely coupled, allowing for easier decentralization at a later point in time.


Designing High-Performance APIs

By incorporating specific design principles, developers can build APIs that scale effectively and operate efficiently. Here are key considerations for building scalable and efficient APIs:
- Stateless design: Implement a stateless architecture where each API request contains all the necessary information for processing. This eliminates the need to maintain session state on the server, allowing for easier scalability and improved performance.
- Resource-oriented design: Embrace a resource-oriented design approach that models API endpoints as resources. This provides a consistent and intuitive structure, enabling efficient data access and manipulation.
- Asynchronous operations: Use asynchronous processing for long-running or computationally intensive tasks. By offloading such operations to background processes or queues, the API can remain responsive, preventing delays and improving overall efficiency.
- Horizontal scaling: Design the API to support horizontal scaling, where additional instances of the API can be deployed to handle increased traffic.
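The stateless-plus-asynchronous combination can be sketched with nothing but the standard library: the "endpoint" accepts a request carrying all its inputs, enqueues the heavy work, and returns a job ID the client can poll. This is a minimal in-process sketch; a production API would use a real task queue or message broker, and the squared-sum workload stands in for any expensive computation:

```python
import queue
import threading
import uuid

jobs: "queue.Queue[tuple[str, int]]" = queue.Queue()
results: dict[str, int] = {}

def worker() -> None:
    # Background worker drains the queue so API calls can return immediately.
    while True:
        job_id, n = jobs.get()
        results[job_id] = sum(i * i for i in range(n))  # stand-in for heavy work
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def submit(n: int) -> str:
    """Stateless 'endpoint': every input arrives in the request; no session state kept."""
    job_id = str(uuid.uuid4())
    jobs.put((job_id, n))
    return job_id  # client later polls a results resource keyed by job_id

job = submit(1_000)
jobs.join()  # in a real API the client would poll instead of blocking
print(results[job])
```

Because the endpoint keeps no per-session state, any instance can serve any request, which is precisely what makes horizontal scaling straightforward.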


Why SUSE is forking Red Hat Enterprise Linux

To understand what’s happening here, we need to go back a few years. In late 2020, Red Hat made a crucial change to CentOS Linux (the Community Enterprise Linux Operating System). For the longest time, CentOS was essentially the free (as in beer) version of Red Hat Enterprise Linux (RHEL), Red Hat’s flagship distribution. Red Hat acquired CentOS in 2014 after a lot of turmoil in the CentOS community and gained a permanent majority on the CentOS board. “The CentOS project was in trouble,” Gunnar Hellekson, Red Hat’s VP and GM for Red Hat Enterprise Linux, told me. “At the same time, we needed a way to collaborate with other communities — OpenStack in particular at the time. And we said, well, here’s an opportunity! We can take the CentOS project. Now we have something that is freely available and close enough to RHEL to do the development on — and then that gives us a way to work in the community. And then when customers move into production, they can go on to Red Hat Enterprise Linux.”


The Disconnected State of Enterprise Risk Management

Compliance, with its myriad frameworks, standards and mandates, remains the primary means by which we assess and maintain the risk posture of our national, defense and private sector entities. Compliance is how we gauge our resilience, determine shortcomings and prioritize mitigation efforts to resolve them. Compliance, ostensibly, is how we determine where to point our limited security resources in the form of controls to ensure protection against threats. And yet, while the threats occur in real time, our compliance efforts remain relegated to a historical reporting function, capturing our prior state at best or, worse yet, someone’s subjective opinion of an organization’s security posture. After all, most compliance programs today are best characterized as “opinion farming at scale,” built on surveys or manual assessments of controls by human analysts, who in turn depend on the cooperation and information of countless system owners. No matter how high you stack those opinions, they don’t turn into facts. 


Downsides to using cloud autoscaling systems

Autoscaling can reduce costs by optimizing resource utilization, but savings are not guaranteed. I have seen autoscaling systems lead to unexpected cost increases. For example, rapid and frequent scaling operations can generate additional charges that are often unexpected. This will undoubtedly happen if resources are not managed efficiently. I’ve seen unpredictable workload patterns or sudden spikes in demand trigger autoscaling processes. This results in more instances or resources provisioned, but also a potentially enormous cloud bill. The only way to work around this is to carefully analyze and forecast workload patterns to balance scalability and cost-effectiveness. ... Certain applications don’t work well with autoscaling systems. Legacy or monolithic applications that rely on static configurations or have complex interdependencies may not perform very well with autoscaling systems. Of course, there is a fix, normally rewriting a portion of the application, or sometimes the entire application, to leverage autoscaling more efficiently.
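The "churny scaling can cost more" point is easy to demonstrate with a back-of-the-envelope model. This sketch assumes a per-launch overhead charge on every scale-up (minimum billing periods, warm-up waste, data transfer, and so on); all rates and demand numbers are illustrative, not any provider's actual pricing:

```python
def autoscale_cost(demand: list[int], rate: float, launch_fee: float) -> float:
    """Hourly instance cost plus a per-launch charge for every scale-up."""
    cost, prev = 0.0, 0
    for n in demand:
        cost += n * rate + max(0, n - prev) * launch_fee
        prev = n
    return cost

def flat_cost(demand: list[int], rate: float) -> float:
    """Cost of statically provisioning for peak demand the whole time."""
    return max(demand) * len(demand) * rate

spiky = [2, 10, 2, 10, 2, 10]               # hourly instance counts, churny workload
print(autoscale_cost(spiky, 1.0, 1.5))      # 75.0: 26 launches add up fast
print(flat_cost(spiky, 1.0))                # 60.0: static peak provisioning wins here
```

With a smooth workload the comparison flips, which is exactly why the article recommends analyzing workload patterns before assuming autoscaling will save money.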


Defining the CISO Role

CISOs are tasked with the strategic leadership of information security for their companies. This can entail building a cybersecurity program and overseeing the teams that execute the policies that underpin that program. The responsibilities are many and varied. For example, Heins is responsible for incident response, security engineering and operations, identity and access management, cloud and application security, and governance, risk, and compliance. Effectively implementing cybersecurity demands that CISOs spend much of their time engaging with stakeholders throughout an organization: board members, other executives, and people in other departments. They also spend part of their time on external engagement. Meg Anderson, vice president and CISO of investment management and insurance company Principal Financial Group, notes that she talks with her CISO peers about emerging threats and best practices. That part of the job can help CISOs think about how to structure their programs effectively and build a pipeline of talent for the future.


Security First! Strategies for Building Safer Software

Having security involved in the initial stages of a software development process always made sense, as with bug fixing, it is faster and cheaper to address security issues early on. But, particularly in larger enterprises, it was rarely done in practice. By the same token, individual development teams would tend not to invest in security if they saw it as the role of a dedicated security team and thus somebody else’s problem. This pushed security to the right, as one of the things that happened between development and deploying to production, where security becomes more difficult and often less effective. It also led to friction between the development and security teams, since the two groups had conflicting goals: Developers were under pressure to ship more features more quickly, and saw security as a gatekeeper, slowing down or even halting development to allow time to investigate issues. At its most extreme, developers felt, security’s ideal situation would be that nothing would be deployed to production at all — after all, if nothing is running, then nothing can get hacked.



Quote for the day:

"One must be convinced to convince, to have enthusiasm to stimulate the others." -- Stefan Zweig

Daily Tech Digest - July 11, 2023

Multiple SD-WAN vendors can complicate move to SASE

The walls between networking and security teams must come down to deliver cloud-based security and network services across today’s sophisticated networks. “The opportunity to leverage a cloud-based architecture to enforce security policies to distributed locations and remote workers is the real value of SASE. It offers management efficiencies, it supports a modern workforce, and it supports an important integration between the network and security teams,” IDC’s Butler says. “In today’s world, when you have so many people working from home and so many distributed applications, a cloud-based security approach is really appealing.” As the market continues to evolve, vendors are boosting their capabilities – networking vendors are acquiring or developing security capabilities to offer SASE, and security providers are augmenting their product portfolios with advanced networking capabilities to offer SASE. That aligns with adoption trends; a majority (68%) of 830 respondents to an IDC survey said they would like to use the same vendor for their SD-WAN and security/SASE solution.


Decoding AI: Insights and Implications for InfoSec

AI is wonderfully adept at narrow tasks, but it is clueless beyond its specific training. It’s like a super-specialist who can thread a needle blindfolded but can’t understand why it shouldn’t sew its own fingers together. Say we task an AI with making a company network as secure as possible. It might suggest shutting down the network, preventing user access or even blocking external dataflows because, hey, it’s technically efficient! ... AI could reshape the world of cybersecurity in unimaginable ways, making our lives easier and more efficient. However, it is essential to bear in mind that AI, despite its remarkable abilities, is essentially a tool. It lacks the human touch—our capacity for intuition, empathy and understanding that extends beyond the data. AI will undoubtedly keep improving, but it is on us to guide its evolution in a way that respects our shared humanity and safeguards our values. So, the next time you see a headline touting the latest AI breakthrough, take a moment to appreciate the amazing technology—but remember that it’s not quite as “intelligent” as it might seem.


Sarah Silverman sues OpenAI, Meta over copyright infringement in AI training

The suits, filed last week in federal district court in San Francisco, argued that Microsoft-backed OpenAI and Meta didn’t have permission to use copyrighted works by Silverman and two other authors, Christopher Golden and Richard Kadrey, when they used them to train ChatGPT and Meta's LLaMA (Large Language Model Meta AI). They ask for injunctions against the companies to prevent them from continuing similar practices, as well as unspecified monetary damages. The heart of the lawsuit, according to the complaint, is OpenAI’s use of a data set called BookCorpus, which it said was created in 2015 for the purpose of large language model training. Much of BookCorpus, the plaintiffs say, was copied from a site called Smashwords, a host for self-published novels, which were under copyright. Additionally, the complaint alleges that there is no way that the book-based data sets used to train OpenAI's models came entirely from legal sources, as no legal databases offer enough content to account for the size of the “Books1” and “Books2” sets.


Law firms under cyberattack

As the UK National Cyber Security Centre (NCSC) noted in a recent report focusing on cyber threats to the legal sector, law firms handle sensitive client information that cybercriminals may find useful, including exploiting opportunities for insider trading, gaining the upper hand in negotiations and litigation, or subverting the course of justice. The potential consequences of such breaches can be severe, as the disruption of business operations can incur substantial costs. Ransomware gangs specifically target law firms to extort money in exchange for allowing the restoration of business operations. In 2020, the Solicitors Regulation Authority (SRA) published a cybersecurity review revealing that 30 out of 40 of the law firms they visited have been victims of a cyberattack. In the remaining ten, cybercriminals have directly targeted their clients through legal transactions. “While not all incidents culminated in a financial loss for clients, 23 of the 30 cases in which firms were directly targeted saw a total of more than £4m [$5m+] of client money stolen,” the SRA noted.


7 IT consultant tricks CIOs should never fall for

Making a business case - Consultants love this one. It’s where the CIO engages them to build the business case for a pet project or priority — not to determine whether there’s even a business case to be made. To make one, they start with the predetermined answer and work backward from there, employing such questionable practices as cherry-picked data, one-sided analyses, inappropriate statistical tests, and selective anecdotes, to name a few, to define and justify a strategic program whose success depends on … surprise! … a major engagement for the consultant’s employer. ... Win, then hire - This is less common for delivery teams than the consultants whose work resulted in the win that created the need for the delivery team, but still … Few consultancies keep a bench of any size. As a result, winning an engagement is often far more stressful than losing one, because after winning an engagement the consultancy has no more than a month or so to hire the staff needed to execute the engagement, familiarize the newly hired staff with the methodology and practices the engagement calls for, and build a working relationship with their new managers.


Why Qubit Connectivity Matters

Of course, high-connectivity architectures are not without disadvantages. High connectivity relies on the ability to shuttle qubits around, and shuttling qubits carries several potential issues. Shuttling qubits can be a relatively slow process compared to the speed of quantum gate operations. This can increase the total computation time and reduce the number of operations that can be performed before the qubits lose coherence. The process of moving qubits introduces the risk of decoherence, which is the loss of the quantum state due to interaction with the environment. Shuttling qubits also adds an extra layer of complexity to the design of the computer, and this can be challenging to implement, especially in a large-scale system. In summary, qubit connectivity plays a vital role in the performance and functionality of quantum computers. It impacts the implementation of quantum algorithms, the creation of quantum entanglement, error correction, and the overall scalability, speed, and efficiency of quantum computing systems. When one considers the quantum modality of choice for their application, qubit connectivity should be one of the factors taken into consideration.
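To make the connectivity trade-off concrete: on a device where qubits are not directly coupled, a two-qubit gate between distant qubits must first route one qubit next to the other via SWAP gates, each of which adds error and time. A common rule of thumb is that the SWAP overhead is the coupling-graph distance minus one. This sketch (with made-up 5-qubit topologies) computes that overhead with a breadth-first search:

```python
def swaps_needed(coupling: dict[int, list[int]], a: int, b: int) -> int:
    """SWAP gates required before a two-qubit gate on qubits a and b can run."""
    dist, frontier, seen = 0, {a}, {a}
    while b not in frontier:
        frontier = {n for q in frontier for n in coupling[q] if n not in seen}
        seen |= frontier
        dist += 1
    return max(0, dist - 1)  # adjacent (or identical) qubits need no SWAPs

# A 5-qubit line: 0-1-2-3-4 (low connectivity, typical of some superconducting chips)
line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
# All-to-all connectivity (e.g. small trapped-ion devices)
full = {i: [j for j in range(5) if j != i] for i in range(5)}

print(swaps_needed(line, 0, 4))  # 3 extra SWAPs on the line
print(swaps_needed(full, 0, 4))  # 0 with all-to-all coupling
```

The gap grows with device size, which is why algorithms with long-range entanglement benefit disproportionately from high-connectivity modalities.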


Analysts: Cybersecurity Funding Set for Rebound

A lot of the optimism has to do with enterprises continuing to invest heavily in cybersecurity, despite a slowdown in other expenditures. Market research firm IDC expects that organizations will spend some $219 billion this year on security products and services — or some 13% more than they did in 2022 — to address threats, to support hybrid work environments, and to meet compliance requirements. The areas that will receive the most spending are managed security services, endpoint security, network security, and identity and access management. "While the theme of conservatism and expectations for continued headwinds have remained throughout the first half of the year, we do expect to see strategic activity slowly begin to rebound in the second half of 2023 and into 2024," says Eric McAlpine, founder and managing partner of analyst firm Momentum Cyber. Financing and M&A activity will both eventually pick up as companies that were able to make do financially so far begin to feel the need for fresh capital to fuel their business, he says.


Why Enterprises Should Merge Private 5G With Programmable Communications

5G private networks provide an opportunity to integrate the application and the network so that the two can inform one another, allowing adjustments to be made in real time. Businesses not only have an improved network with a private cellular network, but they can also sync their applications with the network’s performance, enabling multiple tasks to be completed based on network performance at a specific moment. ... A new generation of digital engagement providers is looking at how these communication platforms evolve into platforms that integrate across a range of business processes. They are not only leveraging robust voice, video and messaging solutions but also introducing fully programmable computer vision and audio analytics solutions. This combination of communications and AI-based media analytics and programmability makes this evolved communications platform an ideal and unexpected solution to Industry 4.0 business needs. New communication platforms are focused less on meeting a single business need and more on integrating communications to evolve and inform applications, making adjustments and building cost-effective efficiencies.


5 ways to prepare a new cybersecurity team for a crisis

Not all security incidents cause an enterprise-level crisis, and not all crises are cyber-related. Natural disasters, product recalls, accidents, and public relations debacles are all examples of non-cyber events that could have a significant negative impact on an organization. So, in preparing a new cybersecurity team for a crisis, it is important to define and rank, first by severity and then by likelihood, what precisely the business would define as a security “crisis,” says John Pescatore, director of emerging security trends at the SANS Institute. “It is not the case that the top of the list will always be something like ransomware,” Pescatore says. Sometimes, a crisis might have nothing to do with cybersecurity, he notes. “For example, I remember hearing a Boston-area hospital CIO talk about how they were bombarded with attempts to get into hospital data after the [Boston Marathon] bombing because press reports had noted the bombers went to that hospital.” Once the cybersecurity team understands what would constitute a security crisis for the company, it should create playbooks for the top handful of them.
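The ranking exercise Pescatore describes can be sketched as a simple scoring pass. The crisis names and the 1–5 severity/likelihood scores below are hypothetical placeholders a team would assign in a workshop, not figures from the article:

```python
# Rank candidate security crises by severity first, then by likelihood,
# and pick the "top handful" that deserve dedicated playbooks.
candidate_crises = [
    {"name": "ransomware outage",          "severity": 5, "likelihood": 3},
    {"name": "customer-data breach",       "severity": 5, "likelihood": 4},
    {"name": "executive account takeover", "severity": 4, "likelihood": 4},
    {"name": "DDoS on public site",        "severity": 3, "likelihood": 5},
]

def rank_crises(crises):
    """Sort by severity, then likelihood (both descending)."""
    return sorted(crises, key=lambda c: (c["severity"], c["likelihood"]), reverse=True)

top = rank_crises(candidate_crises)
playbook_targets = [c["name"] for c in top[:3]]  # playbooks for the top handful
print(playbook_targets)
```

The tuple key makes severity the primary sort dimension, so a high-likelihood but low-severity event (the DDoS here) falls to the bottom of the playbook list.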


Writing your company’s own ChatGPT policy

To help employees grasp and embrace key basics quickly, one useful starting point can be signposting relevant parts of existing policies they can check for best practices. Producing tailored guidance for an internal ChatGPT policy is slightly more complex. To develop a truly all-encompassing ChatGPT policy, companies will likely need to run extensive cross-business workshops and individual surveys that enable them to identify, and discuss, every use case. Putting in this groundwork, however, will allow them to build specific directions which ultimately ensure better protection, as well as giving workers the comprehensive knowledge required to make the most of advanced tech. ... Explicitly highlighting threats and setting unambiguous usage limitations is just as critical, to leave no room for accidental misuse. This is particularly important for businesses where generative AI may be deployed to streamline tasks that involve some level of PII, such as drafting client contracts, writing emails, or suggesting which code snippets to use in programming.



Quote for the day:

"Learning is a lifetime process, but there comes a time when we must stop adding and start updating." -- Robert Brault

Daily Tech Digest - July 10, 2023

Digital Humans: Fad or Future?

A digital human is a computer-generated entity that looks, behaves, and interacts like a real human. “To create a digital human, advanced technologies such as artificial intelligence, machine learning, and natural language processing are used to replicate the complexities of human thought and behavior,” says Matthew Ramirez, a technology entrepreneur and investor. Going beyond concierge services, digital humans could eventually play important roles in areas as diverse as education, healthcare, and entertainment. ... Although digital humans promise multiple benefits, they also present a potential threat. They could be misused in various ways to mislead, defraud, or even physically harm people, Ramirez warns. “It’s crucial to be cautious and consider the negative consequences when creating digital humans, just like with any new technology,” he says. Improvements to generative AI programs are making digital humans more realistic, which increases the possibility that consumers may have difficulty distinguishing when they’re talking to a real human versus a digital human, Bechtel says.


Feds Urge Healthcare Providers, Vendors to Use Strong MFA

CISA recommends that entities implement phishing-resistant multifactor authentication, which can help detect and prevent disclosures of authentication data to a website or application masquerading as a legitimate system, the HHS bulletin says. For instance, phishing-resistant multifactor authentication could require a password or user biometric data, combined with an authenticator such as a personal identity verification card or other cryptographic hardware- or software-based token authenticator, such as a FIDO authenticator using WebAuthn, according to the bulletin. "The layered defense of a properly implemented multifactor authentication solution is stronger than single-factor authentication such as relying on a password alone," HHS OCR wrote. Walsh suggested that healthcare sector entities consider integrating password vaults with MFA. Also, "passwordless authentication is probably in the future but we haven’t seen it implemented in healthcare," he said. But the bottom line, he added, is that "any MFA is probably better than no MFA."


Generative AI is coming for your job. Here are 4 reasons to get excited

Yes, the fast-emerging technology could replace some workplace activities, but it's up to us to make sure its use is focused on removing repetitive tasks, such as scanning spreadsheets for data-entry errors. "I think we should be excited because it has potential to allow us to do more of the high-value things in our work, and less of the stuff that doesn't need valuable thought processes," she says. Furby says it's important to recognize that the introduction of generative AI should not be seen as an endpoint, but as a pathway to increased productivity. ... AI's ability to pick up large chunks of the work associated with everyday activities could free up internal staff to focus on more innovative and interesting projects. "I think that's always a challenge in terms of how you become more efficient in the things that you can do, and how you can approach more topics and scale at speed. And I think that's where the excitement is – generative AI could help us." For all his enthusiasm for emerging technology, Langthorne doesn't want to dismiss the concerns of people who are worried about the rise of generative systems, such as ChatGPT.


UK regulator refers cloud infrastructure market for investigation

The news comes three months after Ofcom raised “significant concerns” about Amazon Web Services (AWS) and Microsoft, alleging that they were harming competition in cloud infrastructure services and abusing their market positions with practices that make interoperability difficult. Ofcom defines cloud infrastructure services as those which are built on physical servers and virtual machines hosted in data centers and consisting of infrastructure as a service (IaaS) products, such as storage, computing and networking, and platform as a service (PaaS), which includes the software tools needed to build and run applications. When the initial investigation was launched, Ofcom said that AWS and Microsoft Azure had a combined UK market share of between 60% and 70%, while the next nearest competitor, Alphabet-owned Google, had a 5% to 10% share. Meanwhile, between 2018 and 2021, the share of the market held by cloud providers other than AWS, Microsoft, and Google fell from 30% to 19%, leading Ofcom to note that such levels of market dominance could potentially make it harder for smaller cloud providers to compete with the market leaders, further consolidating the big providers' revenue and market share.


6 business execs you’ll meet in hell — and how to deal with them

Some executives have exactly zero aptitude when it comes to the technology that enables them to run their businesses. And you probably shouldn’t expect them to, says Bob Stevens (not his real name), former CISO for a large retail operation. After all, they’re not being paid to think about technology; they’re being paid to sell products. “The CEO at that retail company was not a technologist,” says Stevens. “He found it totally uninteresting. So when the IT and security teams would present, his attention would quickly wane and he would start answering texts and reading email. He’d say, ‘Unfortunately, technology means nothing to me. I get that it is important to the company and that we have to have it. So I will manage the business value against the cost. Just don’t try to make me understand it.’” It can be demoralizing, Stevens adds. Worse, because senior leadership doesn’t fully understand the issues in play or the threats to the business, they may not prioritize investments appropriately. 


Greatest cyber threats to aircraft come from the ground

From a CISO's perspective, what matters is not that a specific security vulnerability was found in a particular model of aircraft, but rather the general idea that modern aircraft with interconnected IT networks could potentially allow intrusions into high security avionics equipment from low security passenger internet access systems. This being the case, the time has come for all onboard aircraft systems -- including avionics -- to be regarded as being vulnerable to cyberattacks. As such, the security procedures for protecting them should be as thorough and in-depth "as any other internet-connected device," Kiley says. "The disclosure I did in 2019 was the first major one that involved the industry, the airlines, and the US government cooperating to ensure that the disclosure was done responsibly and following security industry best practices. This should be a model for how to alert the industry of an issue responsibly." Unfortunately, "Many manufacturers in the aviation industry do not understand how to work with security researchers and instead attempt to stifle research by threatening action instead of working together to solve identified issues," observes Kiley.


Monolith or Microservices, or Both: Building Modern Distributed Applications in Go with Service Weaver

Google’s new open source project Service Weaver introduces the idea of decoupling application code from how that code is deployed. Service Weaver is a programming framework for writing and deploying cloud applications in the Go programming language, in which deployment decisions can be delegated to an automated runtime. Service Weaver lets you deploy your application either as a monolith or as microservices, giving you the best of both worlds. With Service Weaver, you write your application as a modular monolith, structuring it around components. Components in Service Weaver are modelled as Go interfaces, for which you provide a concrete implementation of your business logic without coupling it to networking or serialisation code. A component is a kind of actor that represents a computational entity. These modular components, built around core business logic, can call methods on other components as if they were local method calls, regardless of whether the components are running in a single binary or as separate microservices, without hand-writing HTTP or RPC code. 
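The component-as-interface idea can be sketched in a language-agnostic way. Note that Service Weaver itself is a Go framework; the classes below are a hypothetical Python analogue of the pattern, not Weaver's actual API:

```python
# Conceptual sketch: business logic is written against a component interface,
# and a runtime decides whether calls stay in-process or cross the network.
from typing import Protocol

class Reverser(Protocol):               # the component interface
    def reverse(self, s: str) -> str: ...

class LocalReverser:
    """Concrete implementation: pure business logic, no networking code."""
    def reverse(self, s: str) -> str:
        return s[::-1]

class StubReverser:
    """Stand-in for what a runtime substitutes when the component is deployed
    as a separate microservice; a real stub would serialize and make an RPC,
    here it just delegates to the wrapped implementation."""
    def __init__(self, remote):
        self._remote = remote
    def reverse(self, s: str) -> str:
        return self._remote.reverse(s)

def greet(reverser: Reverser) -> str:
    # Caller code is identical whether the component is local or remote.
    return reverser.reverse("weaver")

print(greet(LocalReverser()))               # monolith-style deployment
print(greet(StubReverser(LocalReverser()))) # microservice-style deployment
```

Because callers depend only on the interface, the decision to co-locate or distribute a component becomes a deployment-time choice rather than a code change, which is the core promise described above.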


Private 5G/LTE growing more slowly than expected

The use cases for private cellular networks are numerous and varied, according to IDC, encompassing everything from wide-area applications like grid networks for utility systems and transport networks to local networks for manufacturing facilities or warehouses. Yet three factors have continued to slow the growth of private cellular, which IDC defines as 5G/LTE networks that don’t share traffic between users, as a public network would. The first is slower-than-expected availability of the latest 5G chipsets, specifically those for releases 17 and 18 from 3GPP — the cellular technology standards body — which are designed to improve ultra-reliable, low-latency communications. That creates a drag especially on the advanced new implementations, particularly in the industrial sector, that can be created with private networks, the report said. In the short term, that means that LTE will account for the bulk of spending on private cellular networks, according to the report, and will not be superseded by 5G spending until 2027. Difficulties with integrating private cellular into existing network infrastructure are also slowing growth, IDC noted. 


Red Hat kicked off a tempest in a teapot

We never seem to learn from history. I was part of the United Linux effort in the early 2000s while working at Novell. Scared by Red Hat’s early popularity, a group of would-be contenders to the Red Hat throne, including SUSE, Turbolinux, Conectiva, and Caldera (which became SCO Group), banded together to try to define a common, competitive distribution. It failed. Completely. As I’ve written, “It turns out the market didn’t want a common Linux distribution created by committee. They wanted the industry standard, which happened to be Red Hat.” Fast forward to 2023, and no one is clamoring for a resurrected United Linux, but CentOS had become a way for people to use RHEL without paying for it. It was, in some ways, a United Linux that actually worked, as it gave the companies behind Rocky and Alma Linux a way to compete without contributing. Now that’s gone, and there’s much hand-wringing over how hard it will be to continue delivering Red Hat’s product for free. Rocky Linux assures us it will be possible, in a poorly named post about this “Brave New World.”


Who Should Pay for Payment Scams - Banks, Telcos, Big Tech?

"The banking sector is the only sector reimbursing at the moment, and our belief is that the burden should be spread. I think tech companies should be putting their hands in their pockets, particularly as they profit from it," said David Postings, chief executive of UK Finance. In a letter last week to Prime Minister Rishi Sunak, a group of major U.K. banks said technology companies must contribute to the cost of the online fraud "pandemic" that is undermining international investor confidence in the U.K. economy, according to a report on Sky News. It makes sense for social media companies and others to be held accountable for scams. Users of Facebook, Instagram, Twitter and other platforms have fallen prey to romance scams, cryptocurrency investment scams and more. But before the government starts looking for ways to ask big tech to contribute, let's not forget about the victims. It might be difficult to prove which platform is liable and for how much. Social media conversations are often fluid and move from one platform to another. Tracing back the conversation and then establishing the responsibility across banks and tech companies could take time. 



Quote for the day:

"Leadership is a two-way street, loyalty up and loyalty down." -- Grace Murray Hopper

Daily Tech Digest - July 09, 2023

Data should be a first-class citizen in the cloud

A close cousin of the interoperability problem, data access and control are limited in many cloud environments if they are not designed properly, which can prevent organizations from truly harnessing their business data. There doesn’t seem to be a middle ground here; either data is entirely accessible or not at all. In most cases, access is simply switched off, so valuable data goes unleveraged and systems are underoptimized. You only need to look at the rise of generative AI systems to understand how this limitation affects the value of these systems. If the data is not accessible, then the knowledge engines can’t be appropriately trained. You’ll have dumb AI. This lack of control is due to opaque data ownership models and limited data processing and storage control. The solution is for organizations to create greater transparency and control over their data. This includes defining access privileges, managing encryption, and deciding how and where data is stored. This would ensure that data owners retain sovereignty and information is still available.


Where Data Governance Must Focus in AI Era

In recent years, the ethical implications of AI have come to the forefront of public discussion. Data governance reinforces the importance of adhering to ethical practices in the development and deployment of AI systems. Transparency and accountability should be the pillars upon which AI technologies are built. Generative AI and large language models have the ability to create and manipulate human-like content. This power must be wielded responsibly. Data governance requires developers and organizations to embed ethical guidelines within the AI systems themselves, ensuring that these technologies align with society’s values and do not increase biases or the delivery of misinformation. ... Data governance recognizes the importance of individual autonomy in an AI-driven world. It seeks to empower individuals with the ability to exercise control over their own data and determine how it is utilized. By placing decision-making power in the hands of data subjects, we uphold the fundamental principles of self-determination and personal agency.


The Need for Risk-Based Vulnerability Management to Combat Threats

In comparison to traditional and outdated approaches to vulnerability management, a risk-based strategy enables organizations to assess the level of risk posed by vulnerabilities. This approach allows teams to prioritize vulnerabilities based on their assessed risk levels and remediate those with higher risks, minimizing potential attacks in a way that is hassle-free, continuous, and automated. Over 90% of successful cyberattacks involve the exploitation of unpatched vulnerabilities, and as a result, demand for automated patch management solutions is increasing as organizations seek a smarter and more efficient vulnerability remediation strategy than those employed in the past. ... In the face of today’s threats, it is crucial to have actionable insights based on risk that can drive security remediation efforts forward. By continuously assessing your entire attack surface, Outscan NX tools can pinpoint the most pressing threats, saving your security team valuable time and resources. Outscan NX is a comprehensive suite of internal and external network scanning and cloud security tools customized to suit the unique needs of your organization.
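Risk-based prioritization boils down to combining a technical severity score with business context. The weighting scheme and example findings below are purely illustrative assumptions, not a standard formula or any vendor's scoring model:

```python
# Minimal risk-based prioritization sketch: weight a CVSS-style base score
# (0-10) by asset criticality, exposure, and exploit availability, then
# remediate in descending order of the combined risk score.
def risk_score(cvss: float, asset_criticality: float,
               internet_facing: bool, exploit_available: bool) -> float:
    score = cvss * asset_criticality        # criticality multiplier, e.g. 0.5-2.0
    if internet_facing:
        score *= 1.5                        # reachable by external attackers
    if exploit_available:
        score *= 2.0                        # known exploit raises urgency
    return score

findings = [
    ("CVE-A (internal test box)", risk_score(9.8, 0.5, False, False)),
    ("CVE-B (public web server)", risk_score(7.5, 2.0, True, True)),
    ("CVE-C (HR database)",       risk_score(6.1, 1.5, False, True)),
]
for name, score in sorted(findings, key=lambda f: f[1], reverse=True):
    print(f"{score:6.1f}  {name}")
```

The point of the sketch is the inversion it produces: the highest raw CVSS score (9.8 on an internal test box) ends up at the bottom of the queue, while a lower-severity flaw on an exposed, critical asset rises to the top.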


13 go-to podcasts that inspire IT industry leaders today

Risky Business is a weekly cybersecurity news and current events podcast hosted by Patrick Gray and Adam Boileau. I listen to it because they do an excellent job curating the most relevant news and events in cybersecurity that occurred in the previous week. Gray is a journalist with deep cybersecurity knowledge and Boileau is an executive director at a cybersecurity firm, so the presentation is professional and includes insights on threat actors and motivations. ... I find Gartner’s CIO Mind podcast to be especially insightful and relevant to the work I’m doing. It covers a wide range of topics that CIOs are grappling with, from the recession and cost-cutting, to staffing specialized IT roles and employee retention. It keeps me tuned in to what others in the industry care about and what keeps them up at night, and it gets me thinking about ways I can improve my own organization so we can better support our clients. The podcast also shares advice from Gartner analysts and other experts that I can apply to my own organization and leverage to prepare for what’s coming, such as generative AI, workforce trends, research and development investment trends, and more.


IoT brings resource gains, sustainability to agriculture

Long-range, low-power wireless solutions equip farmers with the data they need in order to achieve their goals of increasing yield and minimizing environmental impact. Lacuna Space is expanding Long-Range WAN (LoRaWAN) coverage with satellites and LoRa technology to increase connectivity for low-coverage areas. With the ability to have reliable connectivity despite location, more farmers around the world can gather data that enables them to make informed decisions about irrigation, fertilization and more to improve crop yield and monitor water usage. Farmers in areas without cellular or Wi-Fi signals can now receive the same technological advancements as those in more connected areas. This supports smarter agricultural practices throughout the world, bringing access to tools that improve operations and crop yield to more individuals in the industry. WaterBit, a precision agriculture irrigation company, gives farmers the ability to have real-time, low-cost IoT sensing systems that improve crop quality and yield through optimized resource use.


Risk Assessment Using Blockchain

Blockchain technology promises new ways to conduct risk assessments; it helps to create a distributed, transparent, and tamper-proof system for assessing risks. Not only can this standardize and streamline the process but also improve the accuracy and reliability of results. A point to note is that blockchain can only increase accuracy and make the process more efficient. It cannot replace human judgment and auditing expertise. It can enhance the auditing process by ensuring the integrity of transaction and event records. ... Decentralized data storage removes the single point of failure and reduces the risk of data loss or corruption. One of the key advantages of using blockchain technology is that it allows for decentralized data storage. During risk assessments, information collected can be stored on the blockchain, making it more secure and less vulnerable to attack. Additionally, the distributed nature of blockchain technology means that multiple stakeholders can access and update the data, improving collaboration and ensuring that everyone is working from the same information.
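The tamper-evidence property comes from hash chaining: each record's hash covers the previous record's hash, so editing any earlier entry invalidates everything after it. A toy sketch of that mechanism (a real blockchain adds consensus, signatures, and distribution on top):

```python
# Toy hash chain illustrating why a blockchain-style ledger is tamper-evident.
import hashlib
import json

def add_record(chain, data):
    """Append a record whose hash covers both its data and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"data": data, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    body["hash"] = digest
    chain.append(body)

def verify(chain):
    """Recompute every hash; any edit to an earlier record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {"data": rec["data"], "prev": rec["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

ledger = []
add_record(ledger, "control X assessed: low risk")
add_record(ledger, "control Y assessed: high risk")
print(verify(ledger))                               # chain is intact
ledger[0]["data"] = "control X assessed: no risk"   # tamper with history
print(verify(ledger))                               # tampering is detected
```

This is exactly the integrity guarantee the excerpt refers to: the ledger cannot silently rewrite an assessment after the fact, though it says nothing about whether the assessment was correct when recorded, which is why human judgment remains essential.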


How can organizations maintain data governance when using generative AI?

The key to the reliability and trust of generative AI responses is combining them with cognitive enterprise search technology. As mentioned, this combination generates responses from enterprise data, and users can validate the information source. Each answer is provided in the user’s context, always accounting for data permissions from the data source with full compliance. In addition, these tools ensure data is consistently up-to-date by delta crawling. Integrating generative AI tools into a trusted knowledge management solution allows employees to see which documents their information came from and even provide further explanations.  ... Firstly, leadership must evaluate the potential impact of the generated content on the organization’s reputation, brand image, and the effectiveness it will have on the specific business unit. Legal and ethical implications and ensuring compliance with regulations and guidelines are necessary considerations, just like any other deployed technology.


Responsible tech ecosystems: pragmatic precursors to responsible regulation

Regulatory technology (regtech) is typically two-fold: compliance tech when regulated firms use it and supervisory tech when regulators use it. As regulators monitor and enforce compliance, regtech presents new opportunities to formulate frameworks. For instance, today, AI activities of market firms are governed under disparate regulations such as data protection, consumer protection, financial services regulations etc. However, concerns around fairness, explainability and accountability have yet to be addressed. Regulatory gaps expose unmitigated risks that supervisory technology can resolve. In a perfect world, even without prompts from regtech, organizations should adopt measures to address these gaps and work towards diversity and transparency, which have a direct impact on their AI models. Not every innovation or its ensuing disruption needs to be welcomed. We see this in the raging debate in the AI & Art spaces. Any regulator has the moral obligation to react to emerging technology, even if post facto. 


Crossing the Data Divide: Closing Gaps in Perception of Data as Corporate Asset

What I am suggesting is that our data leaders need to elevate their vision and messaging to describe a new type of system that is the authoritative reference for all enterprise data assets. This new type of system needs to take its place next to the ERP, CRM, and HRM systems within the enterprise. This means it must provide value for everyone, both technical and non-technical, and also provide context for data assets that include its trustworthiness, source, owner, experts, reviews, and much more, all wrapped in a consumer-grade user interface experience. What is this system? I call it a social data fabric (SDF). That term has been used lightly in the social media world, but I am commandeering it for our purposes. I define an SDF system as a combination of an enterprise data catalog and an internal marketplace where employees can explore and ‘shop’ for data. The catalog portion of the system should ingest and manage a broad number of data, business intelligence, and data-related assets such as term glossaries, KPIs, analytic models, and business processes. 


Executive Q&A: Controlling Cloud Egress Costs

For smaller enterprises, egress charges are fairly minimal as most data resides in a single cloud region and is accessed within that region. For larger enterprises, the number of scenarios which incur egress fees is higher. One such scenario is implementing a hybrid cloud for cost management or a multicloud to make use of the latest optimized computing hardware that might not be available in the primary cloud. For these scenarios, egress fees might be as high as a third of the cloud service expense with naive implementations. More optimal implementations can bring down the egress cost but still fall short as more management complexity is introduced and operations staff needs to be hired to compensate. The reason such fees come as a surprise is that it's hard to predict how much data is going to be accessed across regions, and usually this number only increases with time. ... Moving raw data across network boundaries is infeasible. Building a federation layer to query across all curated data is key. 
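The "third of the cloud service expense" claim is easy to make concrete with a back-of-envelope estimate. The per-GB rate and traffic volume below are placeholder assumptions; real egress rates vary by provider, region, and volume tier:

```python
# Back-of-envelope egress cost estimate for a hybrid/multicloud setup.
# $0.09/GB and the traffic figure are illustrative placeholders only.
def monthly_egress_cost(gb_moved: float, rate_per_gb: float = 0.09) -> float:
    return gb_moved * rate_per_gb

cloud_bill = 100_000.0        # hypothetical monthly spend on a cloud service
cross_region_gb = 370_000.0   # hypothetical data pulled across regions/clouds
egress = monthly_egress_cost(cross_region_gb)
print(f"egress: ${egress:,.0f} ({egress / cloud_bill:.0%} of the bill)")
```

Even at a modest per-GB rate, a few hundred terabytes of cross-boundary traffic per month lands in the same order of magnitude as the scenario described, which is why federating queries to the data, rather than moving raw data, is the suggested remedy.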



Quote for the day:

"Power should be reserved for weightlifting and boats, and leadership really involves responsibility." -- Herb Kelleher

Daily Tech Digest - July 08, 2023

10 ways SecOps can strengthen cybersecurity with ChatGPT

ChatGPT is proving effective at predicting potential threat and intrusion scenarios based on real-time analysis of monitoring data across enterprise networks, combined with the knowledge base the LLMs supporting them are constantly creating. One CISO running a ChatGPT pilot says the goal is to test whether the system can differentiate between false positives and actual threats. The most valuable aspect of the pilot so far is the LLMs’ potential in analyzing the massive amount of threat intelligence data the organization is capturing and then providing contextualized, real-time and relevant insights to SOC analysts. ... Knowing that manual misconfigurations of cybersecurity and threat detection systems are one of the leading causes of breaches, CISOs are interested in how ChatGPT can help identify and recommend configuration improvements by interpreting the indicator-of-compromise (IoC) data provided. The goal is to find out how best to fine-tune configurations to minimize the false positives sometimes caused by IoC-based alerts triggered by a less-than-optimal configuration.


The Interplay of IGA, IAM and GRC for Comprehensive Protection in Cloud Transitions

Managing user access in separate applications that each have their own security rules can be tricky. Consider an example of an employee who has had different roles in the same organization over time. With each new role, this person might have gained more security permissions in systems such as JD Edwards or SAP. The more permissions they have, the higher the chance of fraud or breaking a segregation of duties (SoD) rule, which says that no one person should have control over two conflicting business tasks. To make this example even clearer, imagine that this employee also has access to a different system, such as PeopleSoft, because of work on a project. Now they have access across multiple systems, and keeping track of what they can do becomes more challenging. ... There are tools that can help lower this risk by displaying details about user access and what the users are doing with their access, but often, these tools only show part of the picture, especially when it comes to complex security models and multiple applications, or are siloed into addressing only a singular application.
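The cross-system blind spot described above is what a consolidated SoD check closes: each permission may be harmless within its own application, while the combination violates a rule. A minimal sketch, with made-up system and permission names:

```python
# Cross-system segregation-of-duties (SoD) check: aggregate one user's
# entitlements from every application, then test the combined set against
# conflict rules. System and permission names here are illustrative.
sod_rules = [
    ("create_vendor", "approve_payment"),   # classic procure-to-pay conflict
    ("modify_payroll", "approve_payroll"),
]

user_access = {
    "jdedwards":  {"create_vendor"},
    "peoplesoft": {"approve_payment", "view_reports"},
}

def sod_violations(access_by_system, rules):
    combined = set().union(*access_by_system.values())  # view across silos
    return [pair for pair in rules
            if pair[0] in combined and pair[1] in combined]

print(sod_violations(user_access, sod_rules))
# Each system on its own shows no conflict; it only appears in the union.
```

A per-application tool inspecting either system alone would report zero violations here, which is precisely the "only part of the picture" problem the excerpt warns about.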


Applying the MACH Architecture: Lessons Learned

By designing APIs first, they were able to ensure a smoother, more cohesive development process. This approach has enabled them to take advantage of the robust capabilities of their API gateway, streamlining their processes and fostering efficient communication between various teams. The shift to a cloud-native approach, leveraging SAP-managed cloud, private and public clouds, has enhanced their scalability and flexibility while reducing operational overhead. The combination of these approaches has resulted in a highly efficient, reliable, and scalable e-commerce platform. Embracing headless architecture has led to a transformation in their front-end development. By decoupling the front end from the backend, they have made it easier to make changes and updates to their Angular-based frontend applications, leading to a better user experience. ... Furthermore, the ability of MACH architecture to handle peak loads effectively is particularly relevant in the e-commerce industry. 


How to cultivate a culture of continuous cybersecurity improvement

The interplay between real-time and periodic security practices is central to effective vulnerability management. Since each has its own unique value proposition, a robust cyber defense strategy must blend both types of practices into a unified approach. Real-time security practices are indispensable in a world where threats emerge and evolve in the blink of an eye. For instance, endpoint detection and vulnerability detection must be ongoing processes. They offer a pulse on the network, alerting organizations to threats as they surface. A lapse in real-time activities can spell disaster: recent ransomware attacks have demonstrated that vulnerabilities can be exploited in mere hours, and sometimes less. An effective real-time security system provides the crucial window needed to detect and rectify vulnerabilities before they’re exploited. On the other hand, periodic security practices, such as penetration testing, provide an opportunity to stress-test the system and uncover potential weaknesses. Still, their value should not be overstated. 


Data is not a Microservice

The purpose of a microservice is to power an aspect of some customer experience. Its primary function is operational. The purpose of data is decision-making. Its primary function is TRUTH. How that truth is used can be operational (like an ML model) or analytical (answering some interesting question). Businesses already collect large volumes of data at tremendous speed and dump raw logs into lakes for data engineers to sort through later. Data developers struggle because the data they have taken dependencies on has no ownership, the underlying meaning is not clear, and when something changes from a source system very few people know why and what they should expect the new 'truth' to be as a result. In data, our largest problems are rooted in a lack of trust. In my opinion, a source of truth is an explicitly owned, well-managed, semantically valid data asset that represents an accurate representation of real-world entities or events reflected in code. In the traditional on-premise Data Warehouse, an experienced data architect was responsible for defining the source of truth in a monolithic environment.


Revolutionizing the Nine Pillars of SRE With AI-Engineered Tools

Applying AI to SRE is a complex process with certain challenges. Here are some potential pitfalls, along with ways to address them. Lack of quality data: AI and machine learning models are only as good as the data they are trained on, and inadequate or poor-quality data can lead to inaccurate predictions and insights. Prioritize data hygiene and governance: collect comprehensive and diverse data from your systems, ensure that it is well-structured and free of errors, and store it in a way that’s easily accessible for training AI models. Over-reliance on automation: while AI can greatly enhance automation, relying on it too heavily without human oversight can lead to missed signals or overcorrections in response to false positives. Maintain a balance between automation and human oversight; use AI to support decision-making, not replace it entirely, and have experienced SREs review AI outputs regularly to ensure they make sense and are beneficial. Underestimating the need for AI expertise: implementing AI is not just about buying and deploying a tool. 


LockBit Hits TSMC for $70 Million Ransom: What CIOs Can Learn

TSMC has not given any public indication of how it plans to respond to LockBit’s demand. Bill Bernard, area vice president of cybersecurity company Deepwatch, believes it is unlikely the chipmaker will give in and pay the ransomware gang. “They’re claiming very publicly that the data gathered was not damaging to their ability to do business or to their customers. If true, there’s very little motivation for them to pay this extortion,” he tells InformationWeek. Refusal to pay would be a part of a larger trend observed over the past year or so, according to Bernard. He notes there have been “…more attempted ransomware events, but fewer payouts as businesses see the cost of recovery being significantly less than the cost of the ransom.” Even if refusal to pay is the less expensive option, companies still face consequences in the wake of an attack like this. “If TSMC opts not to pay, it could face short-term operational disruption, potential data loss, and the leak of sensitive information, damaging its reputation and breaching customer trust,” explains Ani Chaudhuri, CEO of data security company Dasera.


Why Are Team Topologies Essential for Software Architecture and Software Development Efficiency?

"Team Topologies" suggests leveraging Conway's Law as a strategic advantage in software architecture. The book proposes that architects can encourage or discourage certain types of designs by shaping the organization and team structures. As Ruth Malan points out, "If we have managers deciding which services will be built, by which teams, we implicitly have managers deciding on the system architecture." This reinforces the critical role of architects and engineering professionals in actively structuring team topologies and their communications and responsibilities. Unfortunately, in many companies, team topologies are determined without adequately considering the expertise of architects and engineering professionals. This lack of involvement can lead to architectural misalignments and inefficiencies. To ensure successful architectural outcomes, it is crucial for organizations to actively involve architects and engineering professionals in decisions related to team topologies. Their knowledge and insights can help shape team structures that align with architectural goals and foster effective communication and collaboration.


4 tips to improve employee experiences while maintaining security and governance

IT security leaders recognize that cyberthreats and attack vectors continually evolve. However, staying ahead of cybercriminals is not Job 1 for employees who simply want to get their work done. Within that context, it’s important to maintain regular, ongoing education and training, said the experts: “Continuously educate and engage. Regularly communicate with employees about the importance of security and governance controls. Offer training sessions, workshops, and awareness programs to educate employees on best practices.” ... In this regard, the enterprise browser can serve as a point of dialog between IT and business users to better understand each other’s needs. “No one wants to be blocked from accessing a particular app or website,” said Lorena Crowley, Head of Chrome Enterprise Marketing at Google. “The browser becomes an educational opportunity for users to learn why an extension is blocked, and for admins to learn about why an extension or website is important for users to get their work done.”


Slimming Down .NET: The Unofficial Experiments of Michal Strehovský

This episode features an interview with Michal Strehovský, a developer on the .NET runtime team who has been experimenting with reducing the size of .NET applications. Strehovský’s experiments have led him to create BFlat and Flattened.NET, personal projects that let .NET developers play with the technology and non-.NET developers get into .NET. One of his experiments involved creating a self-contained WinForms Snake game in C# that was under 8KB in size. By venturing into unsupported territory, using ahead-of-time compilation and trimming, and even writing his own core library to work around missing pieces of the runtime, Strehovský was able to achieve this impressive feat. The standard .NET publishing process includes the entire runtime and base class libraries, resulting in a large executable, but trimming can be used to remove unnecessary components. However, the runtime itself cannot be trimmed. Native AOT can be used to compile the entire app ahead of time, resulting in a smaller runtime and a smaller app size.
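As a rough sketch of the supported side of what the article describes, trimming and Native AOT are enabled through ordinary MSBuild properties in a project file (the property names below are the standard .NET SDK ones; the project itself is hypothetical):

```xml
<!-- SnakeGame.csproj (hypothetical project, .NET 7 or later SDK assumed) -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net7.0</TargetFramework>
    <!-- Remove unreferenced framework code at publish time -->
    <PublishTrimmed>true</PublishTrimmed>
    <!-- Compile the whole app ahead of time; no JIT is shipped -->
    <PublishAot>true</PublishAot>
    <!-- Ask the AOT compiler to favor size over speed -->
    <OptimizationPreference>Size</OptimizationPreference>
  </PropertyGroup>
</Project>
```

Running `dotnet publish -c Release` against such a project would produce a single native executable, typically in the low megabytes. Strehovský’s sub-8KB result went well beyond this: it additionally relied on a hand-written replacement core library, which no supported switch provides.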



Quote for the day:

"Learning is a lifetime process, but there comes a time when we must stop adding and start updating." -- Robert Brault