Daily Tech Digest - July 10, 2023

Digital Humans: Fad or Future?

A digital human is a computer-generated entity that looks, behaves, and interacts like a real human. “To create a digital human, advanced technologies such as artificial intelligence, machine learning, and natural language processing are used to replicate the complexities of human thought and behavior,” says Matthew Ramirez, a technology entrepreneur and investor. Going beyond concierge services, digital humans could eventually play important roles in areas as diverse as education, healthcare, and entertainment. ... Although digital humans promise multiple benefits, they also present a potential threat. They could be misused in various ways to mislead, defraud, or even physically harm people, Ramirez warns. “It’s crucial to be cautious and consider the negative consequences when creating digital humans, just like with any new technology,” he says. Improvements to generative AI programs are making digital humans more realistic, which increases the possibility that consumers may have difficulty distinguishing when they’re talking to a real human versus a digital human, Bechtel says.


Feds Urge Healthcare Providers, Vendors to Use Strong MFA

CISA recommends that entities implement phishing-resistant multifactor authentication, which can help detect and prevent disclosures of authentication data to a website or application masquerading as a legitimate system, the HHS bulletin says. For instance, phishing-resistant multifactor authentication could require a password or user biometric data, combined with an authenticator such as a personal identity verification card or other cryptographic hardware- or software-based token authenticator, such as a FIDO authenticator with WebAuthn, according to the bulletin. "The layered defense of a properly implemented multifactor authentication solution is stronger than single-factor authentication such as relying on a password alone," HHS OCR wrote. Walsh suggested that healthcare sector entities consider integrating password vaults with MFA. Also, "passwordless authentication is probably in the future but we haven’t seen it implemented in healthcare," he said. But the bottom line, he added, is that "any MFA is probably better than no MFA."
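
Why this class of MFA resists phishing comes down to origin binding: the authenticator signs the server's challenge together with the web origin the browser actually collected it from, so a credential exercised on a look-alike site fails verification. The Go sketch below is a conceptual illustration of that property, not the real WebAuthn wire format; the struct and field names are assumptions made for the example.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// assertion is a simplified stand-in for a WebAuthn assertion: the
// authenticator signs the server's challenge together with the origin
// the browser actually collected it from.
type assertion struct {
	Challenge []byte
	Origin    string
	Sig       []byte
}

func sign(priv *ecdsa.PrivateKey, challenge []byte, origin string) assertion {
	digest := sha256.Sum256(append(challenge, []byte(origin)...))
	sig, _ := ecdsa.SignASN1(rand.Reader, priv, digest[:])
	return assertion{Challenge: challenge, Origin: origin, Sig: sig}
}

// verify fails if either the signature is bad or the origin is not the
// relying party's own origin, which is what defeats look-alike sites.
func verify(pub *ecdsa.PublicKey, a assertion, wantOrigin string, wantChallenge []byte) bool {
	if a.Origin != wantOrigin || string(a.Challenge) != string(wantChallenge) {
		return false
	}
	digest := sha256.Sum256(append(a.Challenge, []byte(a.Origin)...))
	return ecdsa.VerifyASN1(pub, digest[:], a.Sig)
}

func main() {
	priv, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	challenge := []byte("server-issued-nonce")

	good := sign(priv, challenge, "https://portal.example-hospital.org")
	phished := sign(priv, challenge, "https://portal.example-hospital.org.evil.com")

	fmt.Println(verify(&priv.PublicKey, good, "https://portal.example-hospital.org", challenge))    // true
	fmt.Println(verify(&priv.PublicKey, phished, "https://portal.example-hospital.org", challenge)) // false
}
```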


Generative AI is coming for your job. Here are 4 reasons to get excited

Yes, the fast-emerging technology could replace some workplace activities, but it's up to us to make sure it is used to remove repetitive tasks, such as scanning spreadsheets for data-entry errors. "I think we should be excited because it has potential to allow us to do more of the high-value things in our work, and less of the stuff that doesn't need valuable thought processes," she says. Furby says it's important to recognize that the introduction of generative AI should not be seen as an endpoint, but as a pathway to increased productivity. ... AI's ability to pick up large chunks of the work associated with everyday activities could free up internal staff to focus on more innovative and interesting projects. "I think that's always a challenge in terms of how you become more efficient in the things that you can do, and how you can approach more topics and scale at speed. And I think that's where the excitement is – generative AI could help us." For all his enthusiasm for emerging technology, Langthorne doesn't want to dismiss the concerns of people who are worried about the rise of generative systems, such as ChatGPT.


UK regulator refers cloud infrastructure market for investigation

The news comes three months after Ofcom raised “significant concerns” about Amazon Web Services (AWS) and Microsoft, alleging that they were harming competition in cloud infrastructure services and abusing their market positions with practices that make interoperability difficult. Ofcom defines cloud infrastructure services as those which are built on physical servers and virtual machines hosted in data centers and consisting of infrastructure as a service (IaaS) products, such as storage, computing and networking, and platform as a service (PaaS), which includes the software tools needed to build and run applications. When the initial investigation was launched, Ofcom said that AWS and Microsoft Azure had a combined UK market share of between 60% and 70%, while the next nearest competitor, Alphabet-owned Google, had a 5% to 10% share. Meanwhile, between 2018 and 2021, the combined market share of cloud providers other than AWS, Microsoft, and Google fell from 30% to 19%, leading Ofcom to note that such levels of market dominance could potentially make it harder for smaller cloud providers to compete with the market leaders, further consolidating the big providers' revenue and market share.


6 business execs you’ll meet in hell — and how to deal with them

Some executives have exactly zero aptitude when it comes to the technology that enables them to run their businesses. And you probably shouldn’t expect them to, says Bob Stevens (not his real name), former CISO for a large retail operation. After all, they’re not being paid to think about technology; they’re being paid to sell products. “The CEO at that retail company was not a technologist,” says Stevens. “He found it totally uninteresting. So when the IT and security teams would present, his attention would quickly wane and he would start answering texts and reading email. He’d say, ‘Unfortunately, technology means nothing to me. I get that it is important to the company and that we have to have it. So I will manage the business value against the cost. Just don’t try to make me understand it.’” It can be demoralizing, Stevens adds. Worse, because senior leadership doesn’t fully understand the issues in play or the threats to the business, they may not prioritize investments appropriately. 


Greatest cyber threats to aircraft come from the ground

From a CISO's perspective, what matters is not that a specific security vulnerability was found in a particular model of aircraft, but rather the general idea that modern aircraft with interconnected IT networks could potentially allow intrusions into high security avionics equipment from low security passenger internet access systems. This being the case, the time has come for all onboard aircraft systems -- including avionics -- to be regarded as being vulnerable to cyberattacks. As such, the security procedures for protecting them should be as thorough and in-depth "as any other internet-connected device," Kiley says. "The disclosure I did in 2019 was the first major one that involved the industry, the airlines, and the US government cooperating to ensure that the disclosure was done responsibly and following security industry best practices. This should be a model for how to alert the industry of an issue responsibly." Unfortunately, "Many manufacturers in the aviation industry do not understand how to work with security researchers and instead attempt to stifle research by threatening action instead of working together to solve identified issues," observes Kiley.


Monolith or Microservices, or Both: Building Modern Distributed Applications in Go with Service Weaver

Google’s new open source project Service Weaver decouples how code is written from how it is deployed. Service Weaver is a programming framework for writing and deploying cloud applications in the Go programming language, where deployment decisions can be delegated to an automated runtime. Service Weaver lets you deploy your application either as a monolith or as microservices, giving you the best of both worlds. With Service Weaver, you write your application as a modular monolith, modularising it using components. Components in Service Weaver are modelled as Go interfaces, for which you provide a concrete implementation of your business logic without coupling it to networking or serialisation code. A component is a kind of actor that represents a computational entity. These modular components, built around core business logic, can call methods of other components like local method calls, regardless of whether the components are running inside a single binary or as separate microservices, without using HTTP or RPC directly.
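
As a concrete illustration, here is a minimal Service Weaver component sketch in Go. It follows the shape of the framework's public documentation (weaver.Implements, weaver.Ref, weaver.Run), but treat the exact API details as assumptions that may differ between releases, and note that the framework's code-generation step (weaver generate) is still required before building.

```go
package main

import (
	"context"
	"log"

	"github.com/ServiceWeaver/weaver"
)

// Reverser is a Service Weaver component: a plain Go interface.
// Whether its implementation runs in-process or as a separate
// microservice is decided by the deployer, not by this code.
type Reverser interface {
	Reverse(ctx context.Context, s string) (string, error)
}

// reverser is the concrete implementation; embedding
// weaver.Implements registers it as the component's implementation.
type reverser struct {
	weaver.Implements[Reverser]
}

func (r *reverser) Reverse(_ context.Context, s string) (string, error) {
	runes := []rune(s)
	for i, j := 0, len(runes)-1; i < j; i, j = i+1, j-1 {
		runes[i], runes[j] = runes[j], runes[i]
	}
	return string(runes), nil
}

// app is the main component; weaver.Ref gives it a handle to Reverser
// that looks like a local object even when it is a remote service.
type app struct {
	weaver.Implements[weaver.Main]
	reverser weaver.Ref[Reverser]
}

func run(ctx context.Context, a *app) error {
	out, err := a.reverser.Get().Reverse(ctx, "monolith")
	if err != nil {
		return err
	}
	log.Println(out)
	return nil
}

func main() {
	if err := weaver.Run(context.Background(), run); err != nil {
		log.Fatal(err)
	}
}
```

The deployer configuration, not this code, then decides whether Reverser runs in the same process as the main component or as its own replicated service.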


Private 5G/LTE growing more slowly than expected

The use cases for private cellular networks are numerous and varied, according to IDC, encompassing everything from wide-area applications like grid networks for utility systems and transport networks to local networks for manufacturing facilities or warehouses. Yet three factors have continued to slow the growth of private cellular, which IDC defines as 5G/LTE networks that don’t share traffic between users, as a public network would. The first is slower-than-expected availability of the latest 5G chipsets, specifically those for releases 17 and 18 from 3GPP — the cellular technology standards body — which are designed to improve ultra-reliable, low-latency communications. That creates a drag on the more advanced implementations that private networks make possible, particularly in the industrial sector, the report said. In the short term, that means that LTE will account for the bulk of spending on private cellular networks, according to the report, not to be superseded by 5G spending until 2027. Difficulties with integrating private cellular into existing network infrastructure are also slowing growth, IDC noted. 


Red Hat kicked off a tempest in a teapot

We never seem to learn from history. I was part of the United Linux effort in the early 2000s while working at Novell. Scared by Red Hat’s early popularity, a group of would-be contenders to the Red Hat throne, including SUSE, Turbolinux, Conectiva, and Caldera (which became SCO Group), banded together to try to define a common, competitive distribution. It failed. Completely. As I’ve written, “It turns out the market didn’t want a common Linux distribution created by committee. They wanted the industry standard, which happened to be Red Hat.” Fast forward to 2023, and no one is clamoring for a resurrected United Linux, but CentOS had become a way for people to use RHEL without paying for it. It was, in some ways, a United Linux that actually worked, as it gave the companies behind Rocky and Alma Linux a way to compete without contributing. Now that’s gone, and there’s much hand-wringing over how hard it will be to continue delivering Red Hat’s product for free. Rocky Linux assures us it will be possible, in a poorly named post about this “Brave New World.”


Who Should Pay for Payment Scams - Banks, Telcos, Big Tech?

"The banking sector is the only sector reimbursing at the moment, and our belief is that the burden should be spread. I think tech companies should be putting their hands in their pockets, particularly as they profit from it," said David Postings, chief executive of UK Finance. In a letter last week to Prime Minister Rishi Sunak, a group of major U.K. banks said technology companies must contribute to the cost of the online fraud "pandemic" that is undermining international investor confidence in the U.K. economy, according to a report on Sky News. It makes sense for social media companies and others to be held accountable for scams. Users of Facebook, Instagram, Twitter and other platforms have fallen prey to romance scams, cryptocurrency investment scams and more. But before the government starts looking for ways to ask big tech to contribute, let's not forget about the victims. It might be difficult to prove which platform is liable and for how much. Social media conversations are often fluid and move from one platform to another. Tracing back the conversation and then establishing the responsibility across banks and tech companies could take time. 



Quote for the day:

"Leadership is a two-way street, loyalty up and loyalty down." -- Grace Murray Hopper

Daily Tech Digest - July 09, 2023

Data should be a first-class citizen in the cloud

A close cousin of the interoperability problem, data access and control are limited in many cloud environments if not designed properly and can prevent organizations from truly harnessing their business data. There doesn’t seem to be a middle ground here; either data is entirely accessible or not at all. More often than not, access is simply switched off, valuable data goes unleveraged, and systems are underoptimized. You only need to look at the rise of generative AI systems to understand how this limitation affects the value of these systems. If the data is not accessible, then the knowledge engines can’t be appropriately trained. You’ll have dumb AI. This lack of control is due to opaque data ownership models and limited data processing and storage control. The solution is for organizations to create greater transparency and control over their data. This includes defining access privileges, managing encryption, and deciding how and where data is stored. This would ensure that data owners retain sovereignty and information is still available.


Where Data Governance Must Focus in AI Era

In recent years, the ethical implications of AI have come to the forefront of public discussion. Data governance reinforces the importance of adhering to ethical practices in the development and deployment of AI systems. Transparency and accountability should be the pillars upon which AI technologies are built. Generative AI and large language models have the ability to create and manipulate human-like content. This power must be wielded responsibly. Data governance requires developers and organizations to embed ethical guidelines within the AI systems themselves, ensuring that these technologies align with society’s values and do not increase biases or the delivery of misinformation. ... Data governance recognizes the importance of individual autonomy in an AI-driven world. It seeks to empower individuals with the ability to exercise control over their own data and determine how it is utilized. By placing decision-making power in the hands of data subjects, we uphold the fundamental principles of self-determination and personal agency.


The Need for Risk-Based Vulnerability Management to Combat Threats

In comparison to traditional and outdated approaches to vulnerability management, a risk-based strategy enables organizations to assess the level of risk posed by vulnerabilities. This approach allows teams to prioritize vulnerabilities based on their assessed risk levels and remediate those with higher risks, minimizing potential attacks in a way that is hassle-free, continuous, and automated. Over 90% of successful cyberattacks involve the exploitation of unpatched vulnerabilities, and as a result the demand for automated patch management solutions is increasing as organizations seek a smarter and more efficient vulnerability remediation strategy than those employed in the past. ... In the face of today’s threats, it is crucial to have actionable insights based on risk that can drive security remediation efforts forward. By continuously assessing your entire attack surface, Outscan NX tools can pinpoint the most pressing threats, saving your security team valuable time and resources. Outscan NX is a comprehensive suite of internal and external network scanning and cloud security tools customized to suit the unique needs of your organization.
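
The core of a risk-based approach can be expressed in a few lines: score each finding by severity in context rather than by CVSS alone. The sketch below is a generic illustration, not Outscan NX's actual scoring model; the weighting factors are assumptions chosen for the example.

```go
package main

import (
	"fmt"
	"sort"
)

// Finding is a detected vulnerability on a specific asset.
type Finding struct {
	ID               string
	CVSS             float64 // base severity, 0-10
	ExploitLikely    float64 // 0-1, e.g. from EPSS or known-exploited lists
	AssetCriticality float64 // 0-1, business importance of the affected asset
	InternetExposed  bool
}

// riskScore combines severity with real-world context so that an
// internet-facing, actively exploited medium can outrank an internal,
// unexploited critical.
func riskScore(f Finding) float64 {
	score := f.CVSS * f.ExploitLikely * f.AssetCriticality
	if f.InternetExposed {
		score *= 1.5
	}
	return score
}

func main() {
	findings := []Finding{
		{ID: "CVE-A", CVSS: 9.8, ExploitLikely: 0.02, AssetCriticality: 0.3},
		{ID: "CVE-B", CVSS: 6.5, ExploitLikely: 0.9, AssetCriticality: 0.9, InternetExposed: true},
	}
	// Remediate in descending order of contextual risk, not raw CVSS.
	sort.Slice(findings, func(i, j int) bool {
		return riskScore(findings[i]) > riskScore(findings[j])
	})
	for _, f := range findings {
		fmt.Printf("%s risk=%.2f\n", f.ID, riskScore(f))
	}
}
```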


13 go-to podcasts that inspire IT industry leaders today

Risky Business is a weekly cybersecurity news and current events podcast hosted by Patrick Gray and Adam Boileau. I listen to it because they do an excellent job curating the most relevant news and events in cybersecurity that occurred in the previous week. Gray is a journalist with deep cybersecurity knowledge and Boileau is an executive director at a cybersecurity firm, so the presentation is professional and includes insights on threat actors and motivations. ... I find Gartner’s CIO Mind podcast to be especially insightful and relevant to the work I’m doing. It covers a wide range of topics that CIOs are grappling with, from the recession and cost-cutting, to staffing specialized IT roles and employee retention. It keeps me tuned in to what others in the industry care about and what keeps them up at night, and it gets me thinking about ways I can improve my own organization so we can better support our clients. The podcast also shares advice from Gartner analysts and other experts that I can apply to my own organization and leverage to prepare for what’s coming, such as generative AI, workforce trends, research and development investment trends, and more.


IoT brings resource gains, sustainability to agriculture

Long-range, low-power wireless solutions equip farmers with the data they need in order to achieve their goals of increasing yield and minimizing environmental impact. Lacuna Space is expanding Long-Range WAN (LoRaWAN) coverage with satellites and LoRa technology to increase connectivity for low-coverage areas. With the ability to have reliable connectivity despite location, more farmers around the world can gather data that enables them to make informed decisions about irrigation, fertilization and more to improve crop yield and monitor water usage. Farmers in areas without cellular or Wi-Fi signals can now receive the same technological advancements as those in more connected areas. This supports smarter agricultural practices throughout the world, bringing access to tools that improve operations and crop yield to more individuals in the industry. WaterBit, a precision agriculture irrigation company, gives farmers the ability to have real-time, low-cost IoT sensing systems that improve crop quality and yield through optimized resource use.


Risk Assessment Using Blockchain

Blockchain technology promises new ways to conduct risk assessments; it helps to create a distributed, transparent, and tamper-proof system for assessing risks. Not only can this standardize and streamline the process, it can also improve the accuracy and reliability of results. A point to note is that blockchain can only increase accuracy and make the process more efficient. It cannot replace human judgment and auditing expertise. It can enhance the auditing process by ensuring the integrity of records of transactions and events. ... Decentralized data storage eliminates the chances of a single point of failure, along with reducing the risk of data loss or corruption. One of the key advantages of using blockchain technology is that it allows for decentralized data storage. During risk assessments, information collected can be stored on the blockchain, making it more secure and less vulnerable to attack. Additionally, the distributed nature of blockchain technology means that multiple stakeholders can access and update the data, improving collaboration and ensuring that everyone is working from the same information.
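
The tamper-evidence property described here does not require picturing a full blockchain: chaining each record to a hash of its predecessor means any later edit to an earlier entry invalidates everything that follows. A minimal, purely illustrative Go sketch (a real distributed ledger adds consensus and replication on top of this):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Record is one risk-assessment entry chained to its predecessor.
type Record struct {
	Data     string
	PrevHash string
	Hash     string
}

func hashOf(data, prev string) string {
	sum := sha256.Sum256([]byte(data + prev))
	return hex.EncodeToString(sum[:])
}

func appendRecord(chain []Record, data string) []Record {
	prev := ""
	if len(chain) > 0 {
		prev = chain[len(chain)-1].Hash
	}
	return append(chain, Record{Data: data, PrevHash: prev, Hash: hashOf(data, prev)})
}

// verify recomputes every hash; any edit to an earlier record is detected.
func verify(chain []Record) bool {
	prev := ""
	for _, r := range chain {
		if r.PrevHash != prev || r.Hash != hashOf(r.Data, prev) {
			return false
		}
		prev = r.Hash
	}
	return true
}

func main() {
	chain := appendRecord(nil, "control X assessed: low risk")
	chain = appendRecord(chain, "control Y assessed: high risk")
	fmt.Println(verify(chain)) // true

	chain[0].Data = "control X assessed: no risk" // tampering with history
	fmt.Println(verify(chain))                    // false
}
```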


How can organizations maintain data governance when using generative AI?

The key to the reliability and trust of generative AI responses is combining them with cognitive enterprise search technology. As mentioned, this combination generates responses from enterprise data, and users can validate the information source. Each answer is provided in the user’s context, always accounting for data permissions from the data source with full compliance. In addition, these tools ensure data is consistently up-to-date by delta crawling. Integrating generative AI tools into a trusted knowledge management solution allows employees to see which documents their information came from and even provide further explanations.  ... Firstly, leadership must evaluate the potential impact of the generated content on the organization’s reputation and brand image, as well as its effectiveness for the specific business unit. Legal and ethical implications, and compliance with regulations and guidelines, are necessary considerations, just as with any other deployed technology.
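
One way to read "always accounting for data permissions from the data source" is that retrieved passages are filtered against the caller's entitlements before they ever reach the model, and each passage keeps its source ID so the answer can be validated. The sketch below is a schematic illustration of that pattern; the types and the naive keyword search are assumptions, not any vendor's implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// Document is an indexed enterprise source with an access-control list.
type Document struct {
	ID      string
	Text    string
	Allowed map[string]bool // user or group IDs permitted to read it
}

// retrieve returns only passages the requesting user may see, so the
// generated answer can never leak content the user could not open directly,
// and each snippet keeps its source ID for citation.
func retrieve(index []Document, user, query string) []Document {
	var hits []Document
	for _, d := range index {
		if d.Allowed[user] && matches(d.Text, query) {
			hits = append(hits, d)
		}
	}
	return hits
}

// matches is a placeholder for real semantic or keyword search.
func matches(text, query string) bool {
	return strings.Contains(strings.ToLower(text), strings.ToLower(query))
}

func main() {
	index := []Document{
		{ID: "hr-policy-7", Text: "Parental leave policy details ...", Allowed: map[string]bool{"alice": true}},
		{ID: "exec-comp", Text: "Executive compensation and leave ...", Allowed: map[string]bool{"cfo": true}},
	}
	// Only documents alice is entitled to are passed to the model and cited.
	for _, d := range retrieve(index, "alice", "leave") {
		fmt.Println("cite:", d.ID)
	}
}
```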


Responsible tech ecosystems: pragmatic precursors to responsible regulation

Regulatory technology (regtech) is typically two-fold: compliance tech when regulated firms use it and supervisory tech when regulators use it. As regulators monitor and enforce compliance, regtech presents new opportunities to formulate frameworks. For instance, today, AI activities of market firms are governed under disparate regulations such as data protection, consumer protection, financial services regulations etc. However, concerns around fairness, explainability and accountability are yet to be addressed. Regulatory gaps expose unmitigated risks that supervisory technology can resolve. In a perfect world, even without prompts from regtech, organizations should adopt measures to address these gaps and work towards diversity and transparency, which have a direct impact on their AI models. Not every innovation or its ensuing disruption needs to be welcomed. We see this in the raging debate in the AI & Art spaces. Any regulator has the moral obligation to react to emerging technology, even if post-facto. 


Crossing the Data Divide: Closing Gaps in Perception of Data as Corporate Asset

What I am suggesting is that our data leaders need to elevate their vision and messaging to describe a new type of system that is the authoritative reference for all enterprise data assets. This new type of system needs to take its place next to the ERP, CRM, and HRM systems within the enterprise. This means it must provide value for everyone, both technical and non-technical, and also provide context for data assets that includes their trustworthiness, source, owner, experts, reviews, and much more, all wrapped in a consumer-grade user interface experience. What is this system? I call it a social data fabric (SDF). That term has been used lightly in the social media world, but I am commandeering it for our purposes. I define an SDF system as a combination of an enterprise data catalog and an internal marketplace where employees can explore and ‘shop’ for data. The catalog portion of the system should ingest and manage a broad range of data, business intelligence, and data-related assets such as term glossaries, KPIs, analytic models, and business processes. 


Executive Q&A: Controlling Cloud Egress Costs

For smaller enterprises, egress charges are fairly minimal as most data resides in a single cloud region and is accessed within that region. For larger enterprises, the number of scenarios which incur egress fees is higher. One such scenario is implementing a hybrid cloud for cost management or a multicloud to make use of the latest optimized computing hardware that might not be available in the primary cloud. For these scenarios, egress fees might be as high as a third of the cloud service expense with naive implementations. More optimal implementations can bring down the egress cost but still fall short as more management complexity is introduced and operations staff needs to be hired to compensate. The reason such fees come as a surprise is that it's hard to predict how much data is going to be accessed across regions, and usually this number only increases with time. ... Moving raw data across network boundaries is infeasible. Building a federation layer to query across all curated data is key. 



Quote for the day:

"Power should be reserved for weightlifting and boats, and leadership really involves responsibility." -- Herb Kelleher

Daily Tech Digest - July 08, 2023

10 ways SecOps can strengthen cybersecurity with ChatGPT

ChatGPT is proving effective at predicting potential threat and intrusion scenarios based on real-time analysis of monitoring data across enterprise networks, combined with the knowledge base the LLMs supporting them are constantly creating. One CISO running a ChatGPT pilot says the goal is to test whether the system can differentiate between false positives and actual threats. The most valuable aspect of the pilot so far is the LLMs’ potential in analyzing the massive amount of threat intelligence data the organization is capturing and then providing contextualized, real-time and relevant insights to SOC analysts. ... Knowing that manual misconfigurations of cybersecurity and threat detection systems are one of the leading causes of breaches, CISOs are interested in how ChatGPT can help identify and recommend configuration improvements by interpreting the indicator-of-compromise (IoC) data provided. The goal is to find out how best to fine-tune configurations to minimize the false positives sometimes caused by IoC-based alerts triggered by a less-than-optimal configuration.
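
A pilot like the one described typically amounts to packaging alert context into a prompt and asking the model for a triage verdict. The Go sketch below shows that flow against OpenAI's chat completions endpoint as publicly documented at the time of writing; treat the model name and field layout as assumptions, and note that real telemetry should be redacted and approved before it leaves the organization.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatRequest struct {
	Model    string    `json:"model"`
	Messages []message `json:"messages"`
}

type chatResponse struct {
	Choices []struct {
		Message message `json:"message"`
	} `json:"choices"`
}

// triageAlert asks the model whether an alert looks like a false positive.
// A real deployment would anonymize indicators before sending them anywhere.
func triageAlert(alert string) (string, error) {
	body, _ := json.Marshal(chatRequest{
		Model: "gpt-4",
		Messages: []message{
			{Role: "system", Content: "You are a SOC analyst assistant. Classify the alert as LIKELY_FALSE_POSITIVE or NEEDS_INVESTIGATION and explain briefly."},
			{Role: "user", Content: alert},
		},
	})
	req, err := http.NewRequest("POST", "https://api.openai.com/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	var out chatResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	if len(out.Choices) == 0 {
		return "", fmt.Errorf("empty response")
	}
	return out.Choices[0].Message.Content, nil
}

func main() {
	verdict, err := triageAlert("Multiple failed logins from 203.0.113.7 followed by a success and new MFA device enrollment.")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(verdict)
}
```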


The Interplay of IGA, IAM and GRC for Comprehensive Protection in Cloud Transitions

Managing user access in separate applications that each have their own security rules can be tricky. Consider an example of an employee who has had different roles in the same organization over time. With each new role, this person might have gained more security permissions in systems such as JD Edwards or SAP. The more permissions they have, the higher the chance of fraud or breaking a segregation of duties (SoD) rule, which says that no one person should have control over two conflicting business tasks. To make this example even clearer, imagine that this employee also has access to a different system, such as PeopleSoft, because of work on a project. Now they have access across multiple systems, and keeping track of what they can do becomes more challenging. ... There are tools that can help lower this risk by displaying details about user access and what the users are doing with their access, but often, these tools only show part of the picture, especially when it comes to complex security models and multiple applications, or are siloed into addressing only a singular application.
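
At its core, a segregation-of-duties check is a set intersection over whatever entitlements a person has accumulated across every connected system. A simplified, illustrative Go sketch (the rule and entitlement names are hypothetical):

```go
package main

import "fmt"

// sodRules lists pairs of entitlements that one person should not hold
// together, regardless of which application grants each of them.
var sodRules = [][2]string{
	{"create_vendor", "approve_payment"},
	{"modify_payroll", "approve_payroll"},
}

// violations checks a user's aggregated entitlements (collected from
// JD Edwards, SAP, PeopleSoft, and so on) against the SoD rules.
func violations(entitlements map[string]bool) [][2]string {
	var out [][2]string
	for _, rule := range sodRules {
		if entitlements[rule[0]] && entitlements[rule[1]] {
			out = append(out, rule)
		}
	}
	return out
}

func main() {
	// Access accumulated across roles and systems over time.
	user := map[string]bool{
		"create_vendor":   true, // from an old accounts-payable role in SAP
		"approve_payment": true, // from a project grant in PeopleSoft
	}
	for _, v := range violations(user) {
		fmt.Printf("SoD conflict: %s + %s\n", v[0], v[1])
	}
}
```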


Applying the MACH Architecture: Lessons Learned

By designing APIs first, they were able to ensure a smoother, more cohesive development process. This approach has enabled them to take advantage of the robust capabilities of their API gateway, streamlining their processes and fostering efficient communication between various teams. The shift to a cloud-native approach, leveraging SAP-managed cloud, private and public clouds, has enhanced their scalability and flexibility while reducing operational overhead. The combination of these approaches has resulted in a highly efficient, reliable, and scalable e-commerce platform. Embracing headless architecture has led to a transformation in their front-end development. By decoupling the front end from the backend, they have made it easier to make changes and updates to their Angular-based frontend applications, leading to a better user experience. ... Furthermore, the ability of MACH architecture to handle peak loads effectively is particularly relevant in the e-commerce industry. 


How to cultivate a culture of continuous cybersecurity improvement

The interplay between real-time and periodic security practices is central to effective vulnerability management. Since each has its own unique value proposition, a robust cyber defense strategy must blend both types of practices into a unified approach. Real-time security practices are indispensable in a world where threats emerge and evolve in the blink of an eye. For instance, endpoint detection and vulnerability detection must be ongoing processes. They offer a pulse on the network, alerting organizations to threats as they surface. A lapse in real-time activities can spell disaster: recent ransomware attacks have demonstrated that vulnerabilities can be exploited in mere hours, and sometimes less. An effective real-time security system provides the crucial window needed to detect and rectify vulnerabilities before they’re exploited. On the other hand, periodic security practices, such as penetration testing, provide an opportunity to stress-test the system and uncover potential weaknesses. Still, their value should not be overstated. 


Data is not a Microservice

The purpose of a microservice is to power an aspect of some customer experience. Its primary function is operational. The purpose of data is decision-making. Its primary function is TRUTH. How that truth is used can be operational (like an ML model) or analytical (answering some interesting question). Businesses already collect large volumes of data at tremendous speed and dump raw logs into lakes for data engineers to sort through later. Data developers struggle because the data they have taken dependencies on has no ownership, the underlying meaning is not clear, and when something changes from a source system very few people know why and what they should expect the new 'truth' to be as a result. In data, our largest problems are rooted in a lack of trust. In my opinion, a source of truth is an explicitly owned, well-managed, semantically valid data asset that represents an accurate representation of real-world entities or events reflected in code. In the traditional on-premise Data Warehouse, an experienced data architect was responsible for defining the source of truth in a monolithic environment.


Revolutionizing the Nine Pillars of SRE With AI-Engineered Tools

Applying AI to SRE is a complex process with certain challenges. Here are some potential pitfalls along with ways to address them. Lack of quality data: AI and machine learning models are only as good as the data they are trained on, and inadequate or poor-quality data can lead to inaccurate predictions and insights. Prioritize data hygiene and governance: collect comprehensive and diverse data from your systems, ensure that it is well-structured and free of errors, and store it in a way that is easily accessible for training AI models. Over-reliance on automation: while AI can greatly enhance automation, relying on it too heavily without human oversight can lead to missed signals or overcorrections in response to false positives. Maintain a balance between automation and human oversight; use AI to support decision-making, not replace it entirely, and have experienced SREs review AI outputs regularly to ensure they make sense and are beneficial. Underestimating the need for AI expertise: implementing AI is not just about buying and deploying a tool. 


LockBit Hits TSMC for $70 Million Ransom: What CIOs Can Learn

TSMC has not given any public indication of how it plans to respond to LockBit’s demand. Bill Bernard, area vice president of cybersecurity company Deepwatch, believes it is unlikely the chipmaker will give in and pay the ransomware gang. “They’re claiming very publicly that the data gathered was not damaging to their ability to do business or to their customers. If true, there’s very little motivation for them to pay this extortion,” he tells InformationWeek. Refusal to pay would be a part of a larger trend observed over the past year or so, according to Bernard. He notes there have been “…more attempted ransomware events, but fewer payouts as businesses see the cost of recovery being significantly less than the cost of the ransom.” Even if refusal to pay is the less expensive option, companies still face consequences in the wake of an attack like this. “If TSMC opts not to pay, it could face short-term operational disruption, potential data loss, and the leak of sensitive information, damaging its reputation and breaching customer trust,” explains Ani Chaudhuri, CEO of data security company Dasera.


Why Are Team Topologies Essential for Software Architecture and Software Development Efficiency?

"Team Topologies" suggests leveraging Conway's Law as a strategic advantage in software architecture. The book proposes that architects can encourage or discourage certain types of designs by shaping the organization and team structures. As Ruth Malan points out, "If we have managers deciding which services will be built, by which teams, we implicitly have managers deciding on the system architecture." This reinforces the critical role of architects and engineering professionals in actively structuring team topologies and their communications and responsibilities. Unfortunately, in many companies, team topologies are determined without adequately considering the expertise of architects and engineering professionals. This lack of involvement can lead to architectural misalignments and inefficiencies. To ensure successful architectural outcomes, it is crucial for organizations to actively involve architects and engineering professionals in decisions related to team topologies. Their knowledge and insights can help shape team structures that align with architectural goals and foster effective communication and collaboration.


4 tips to improve employee experiences while maintaining security and governance

IT security leaders recognize that cyberthreats and attack vectors continually evolve. However, staying ahead of cybercriminals is not Job 1 for employees who simply want to get their work done. Within that context, it’s important to maintain regular, ongoing education and training, said the experts: “Continuously educate and engage. Regularly communicate with employees about the importance of security and governance controls. Offer training sessions, workshops, and awareness programs to educate employees on best practices.” ... In this regard, the enterprise browser can serve as a point of dialog between IT and business users to better understand each other’s needs. “No one wants to be blocked from accessing a particular app or website,” said Lorena Crowley, Head of Chrome Enterprise Marketing at Google. “The browser becomes an educational opportunity for users to learn why an extension is blocked, and for admins to learn about why an extension or website is important for users to get their work done.”


Slimming Down .NET: The Unofficial Experiments of Michal Strehovský

This episode features an interview with Michal Strehovský, a developer on the .NET runtime team who has been experimenting with reducing the size of .NET applications. Strehovský’s experiments have led him to create BFlat and Flattened.NET, personal projects that allow .NET developers to play with the technology and non-.NET developers to get into .NET. One of his experiments involved creating a self-contained WinForms Snake game in C# that was under 8KB in size. By venturing into unsupported territory, such as ahead-of-time compilation and trimming, and even writing his own core library to work around missing pieces of the runtime, Strehovský was able to achieve this impressive feat. The standard .NET publishing process includes the entire runtime and base class libraries, resulting in a large executable, but trimming can be used to remove unnecessary components. However, the runtime itself cannot be trimmed. Native AoT can be used to compile the entire app ahead of time, resulting in a smaller runtime and smaller app size.



Quote for the day:

"Learning is a lifetime process, but there comes a time when we must stop adding and start updating." -- Robert Brault

Daily Tech Digest - July 06, 2023

Man vs machine: a secure email firm aims to bring post-quantum cryptography to the cloud

"While quantum computers will soon be able to decrypt 'normally' encrypted data quite easily, they will cut their teeth on post-quantum secure encryption," said Pfau. Tutanota's plan is using a hybrid encryption approach—at first, at least. All data will be encrypted using both classical and the new post-quantum proof algorithms. This double protection will make sure that the new algorithms have time to prove themselves as actually safe. PQDrive is the last step into Tutanota's post-quantum challenge. The company started its mission three years ago with PQMail to make both their email and calendar apps both post-quantum resistant. The team has already begun to add the new algorithms into the software, which should be fully updated for all its 10 million users by 2024. Pfau is very happy that the algorithms the team chose to work with years ago were awarded among the best choice of secure post-quantum encryption by the National Institute of Standards and Technology (NIST).


To close the skills gap, stop focusing on skills candidates don’t have

Modern hiring strategies should be designed to bring employment opportunities to under-represented talent communities, including people with disabilities, women of colour and members of the military and their spouses. Bridging the gap between under-represented communities and equitable job opportunities can help fuel growth and close the talent gap. Likewise, it’s important to work with likeminded organizations that operate with under-represented communities to empower the next generation of skilled workers by giving back and creating new career pathways for them. For example, since 2015, Equinix has partnered with World Pulse, a global, online community that connects and amplifies women’s voices, as well as provides digital empowerment training. The partnership can create a new career pathway for women around the world with digital skills and resources and help close the digital divide’s gender disparity in an organic, grassroots way to maximize the impact within these communities.


New Chinese Counterespionage Law Aimed at US Tech Sector

The revised law grants "state security organs," the armed forces, the CCP and public institutions the power to proactively respond to all forms of network attacks, attacks on critical information infrastructure, and those that aim to obstruct, control or disrupt government functions. It also gives the government power to take legal action against foreign institutions suspected of carrying out espionage activities. "Acts of espionage endangering the PRC's national security that are carried out, instigated or funded by foreign institutions, organizations or individuals, or that are carried out by domestic institutions, organizations or individuals colluding with foreign institutions, organizations or individuals must be legally pursued," it reads. The revised law also gives agencies the power to "inspect the electronic equipment, facilities and related programs and tools of relevant individuals and organizations," and seal or seize property if the entity under investigation fails to employ immediate corrective measures.


5 ways to boost server efficiency

According to the Uptime Institute report, power management can increase latency by 20 to 80 microseconds, which is unacceptable for some types of workloads, such as financial trading. "And there are some applications where you might decide not to use it because it will cause performance or response time problems," he says. But there are other applications where delays won’t have a business impact. "The biggest mistake is that some operators are risk averse," he says. "They think that if they're going to save a couple of hundred bucks a server on their energy bill but are risking breaking their SLA which will cost them a million dollars, they're not going to turn [power management] on." Dietrich recommends that when companies buy new servers and run their performance tests, make sure they test whether power management affects the applications adversely or not. "If it doesn't bother them, then you can use power management," he says. "You can implement a set of power-management functions that will let you save energy and still provide response time and performance that your customers want."


CIOs, Heed On-Premises App and Infrastructure Performance

As more applications run across on-premises and cloud environments, IT teams responsible for managing availability and performance face significant challenges. Today, most IT departments use separate tools to monitor on-premises and cloud applications, which brings a lack of visibility across the entire application path in hybrid environments. IT leaders can’t visualize the path up and down the application stack and they can’t derive business context, making it virtually impossible to troubleshoot issues quickly. This leaves them in a firefighting mode to solve issues before they affect end users. An IT department’s worst nightmare, like an outage or even damaging downtime, surges when metrics such as MTTR and MTTX inevitably rise. To avoid these issues, IT teams require an observability platform for unified visibility across their entire IT estate. Through this platform, IT leaders can access real-time insights of IT availability and performance across both on-premises and public cloud environments and are able to correlate IT data with real-time business metrics, allowing them to prioritize issues that matter most to customers and the business.


IBM shutters Cloud for Education service just two years after launch

IBM didn’t really give any official reason for the closure, saying simply that it regularly evaluates its cloud service offerings while keeping things like customer requirements and consumption in perspective. The service will continue to operate as normal until Nov. 30, and customers are being invited to talk with IBM’s representatives about the steps they can take to migrate their data and workloads to an alternative platform. Holger Mueller of Constellation Research Inc. told SiliconANGLE that Cloud for Education clearly wasn’t as successful as the company had hoped it would be, because it wouldn’t retire the offering otherwise. “But it’s good to see IBM is retiring the service in a respectful way, giving its customers several months to work out how they’re going to migrate their workloads to an alternative platform,” Mueller said. “Generally, most cloud vendors will only give their customers 30 days notice when they decide to sunset a service.” The shutdown may have a somewhat negative impact on IBM’s cloud reputation though, given how it has struggled to achieve the same kind of success as its rivals, Amazon Web Services Inc., Microsoft Corp., Google LLC and even Oracle Corp.


Making intelligent automation work at scale

“We continue to make significant progress in operating with a digital-first mindset and reimagining our end-to-end processes with IA,” says Ajay Anand, vice president of strategy and business services for Global Services at J&J. “We are using insights from our IA maturity assessment efforts to identify large untapped value pools to drive visibility with our executive committee and functional leaders,” Anand says. “In addition, we are also focused on developing a framework for generative AI use case development and prioritization.” The enterprise IA program is delivering on “experience, effectiveness, and efficiency — giving our employees more time to focus on creative innovations and upskilling,” says Steve Sorensen, vice president of technology services, supply chain, data integration, and reliability engineering at J&J. “It is enabling the reimagining, simplifying, and digitizing of processes for employees, patients, healthcare professionals, and other stakeholders, while delivering significant value for the organization.”


How Much Architecture Modeling Should You Do? Just Enough – Part 1

The fundamental challenge with JBGE is that it is situational. For example, I often draw a diagram on a whiteboard to explore complex logic and then discard it once I’m done with it. In this case a whiteboard diagram is fine because it helps me to solve the issue which I’m thinking through with whomever I’m working. But, what if we’re in a situation where we’ll need to update this logic later AND will want to do it via the diagram instead of via source code? Clearly a hand-drawn sketch isn’t good enough in this case and we’ll want to create a detailed diagram using a sophisticated software-based tool. We’d still create an agile model even though it is much more sophisticated than a sketch because JBGE reflects the needs of the situation. To determine if an architecture model is JBGE you must actively work with the direct audience of that artifact. In the case of a business architecture model this would be both your business stakeholders and the implementation team(s) that are going to work with the model. Without knowing what the audience wants, you cannot realistically create something which is JBGE, putting you in a situation where you’re motivated to put far more effort into the artifact than you need to.


Unmasking Deepfakes: Defending Against a Growing Threat

“The same truth about authentication of audio or visual content is true about authentication in the technical systems of identity.” Amper says while the technology is maturing rapidly toward lifelike, intelligent impersonations, the human eye can still spot blurring around the ears or hairline, unnatural blinking patterns, or differences in image resolution. “Color amplification tools that visualize blood flow or ML algorithms trained on spectral analysis are equally effective at detecting and vetting extreme behavior,” he says. He says although contemporary deepfakes are extremely well-done and increasingly hard to recognize, digital identity verification and liveness detection can authenticate a person’s unique identity markers. Once a user has been confirmed as the genuine owner of the real-world identity they are claiming, deep convolutional neural networks can be trained and leveraged for biometric liveness checks including textural analysis, geometry calculation, or traditional challenge-response mechanisms to verify if the person presented on screen is real.


Promoting responsible AI: Balancing innovation and regulation

From a cybersecurity perspective, we must address privacy and security concerns. Bad actors are successfully using confidentiality attacks to draw out sensitive information from AI systems. Without proper security measures, institutions and individuals are at risk. To protect students, for example, institutions may put in place policies curbing the use of AI tools in specific instances or provide educational content cautioning them against sharing confidential information with AI platforms. Algorithmic biases, inaccuracies, and overgeneralizations represent intrinsic limitations of the technology since the models are a reflection of the data they are trained on. Even if care is taken to ensure input data is fact-checked and accurate, hallucinations may still occur. Therefore, a human element is still important in the use of AI. Fact checks and discerning eyes can help weed out inaccuracies. Councils guided by community-oriented ethical guidelines can help reduce biases.



Quote for the day:

"Speak softly and carry a big stick; you will go far." -- Theodore Roosevelt

Daily Tech Digest - July 05, 2023

AI gold rush makes basic data security hygiene critical

APIs, in particular, are hot targets as they are widely used today and often carry vulnerabilities. Broken object level authorization (BOLA), for instance, is among the top API security threats identified by the Open Worldwide Application Security Project. In BOLA incidents, attackers exploit weaknesses in how API requests are authorized, crafting requests that access data objects they should not be able to reach. Such oversights underscore the need for organizations to understand the data that flows over each API, Ray said, adding that this area is a common challenge for businesses. Most do not even know where or how many APIs they have running across the organization, he noted. There is likely an API for every application that is brought into the business, and the number further increases amid mandates for organizations to share data, such as healthcare and financial information. Some governments are recognizing such risks and have introduced regulations to ensure APIs are deployed with the necessary security safeguards, he said. And where data security is concerned, organizations need to get the fundamentals right. 
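
BOLA is easiest to see in code: the endpoint authenticates the caller but never checks whether the requested object belongs to them. The hypothetical Go handler below shows the flaw and the object-level check that fixes it; the routes and header are illustrative, not taken from any cited system.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

var recordOwners = map[string]string{"patient-17": "alice", "patient-42": "bob"}

// Vulnerable: any authenticated user can fetch any record just by changing the ID.
func getRecordBOLA(w http.ResponseWriter, r *http.Request) {
	id := r.URL.Query().Get("id")
	fmt.Fprintf(w, "record %s\n", id)
}

// Fixed: object-level authorization compares the record's owner to the caller.
func getRecord(w http.ResponseWriter, r *http.Request) {
	id := r.URL.Query().Get("id")
	caller := r.Header.Get("X-Authenticated-User") // set by the auth layer upstream
	if recordOwners[id] != caller {
		http.Error(w, "forbidden", http.StatusForbidden)
		return
	}
	fmt.Fprintf(w, "record %s\n", id)
}

func main() {
	http.HandleFunc("/v1/records-vulnerable", getRecordBOLA)
	http.HandleFunc("/v1/records", getRecord)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```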


Microsoft pushes for government regulation of AI. Should we trust it?

By focusing on legislation for the dramatic-sounding but faraway potential apocalyptic risks posed by AI, Altman wants Congress to pass important-sounding, but toothless, rules. They largely ignore the very real dangers the technology presents: the theft of intellectual property, the spread of misinformation in all directions, job destruction on a massive scale, ever-growing tech monopolies, loss of privacy and worse. If Congress goes along, Altman, Microsoft and others in Big Tech will reap billions, the public will remain largely unprotected, and elected leaders can brag about how they’re fighting the tech industry by reining in AI. At the same hearing where Altman was hailed, New York University professor emeritus Gary Marcus issued a cutting critique of AI, Altman, and Microsoft. He told Congress that it faces a “perfect storm of corporate irresponsibility, widespread deployment, lack of regulation and inherent unreliability.” He charged that OpenAI is “beholden” to Microsoft, and said Congress shouldn’t follow his recommendations.


Ghostscript bug could allow rogue documents to run system commands

The problem came about because Ghostscript’s handling of filenames for output made it possible to send the output into what’s known in the jargon as a pipe rather than a regular file. Pipes, as you will know if you’ve ever done any programming or script writing, are system objects that pretend to be files, in that you can write to them as you would to disk, or read data in from them, using regular system functions such as read() and write() on Unix-type systems, or ReadFile() and WriteFile() on Windows… …but the data doesn’t actually end up on disk at all. Instead, the “write” end of a pipe simply shovels the output data into a temporary block of memory, and the “read” end of it sucks in any data that’s already sitting in the memory pipeline, as though it had come from a permanent file on disk. This is super-useful for sending data from one program to another. When you want to take the output from program ONE.EXE and use it as the input for TWO.EXE, you don’t need to save the output to a temporary file first, and then read it back in using the > and < characters for file redirection
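
The same pipe idea is easy to demonstrate from a program: output from one command flows straight into another process through an in-memory pipe without touching disk, which is also why letting a document choose a pipe (that is, a command) as its "output file" is dangerous. A small Go illustration for a Unix-like system:

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Equivalent of: ls -l | grep ".go"
	// No temporary file is involved; the data moves through an in-memory pipe.
	list := exec.Command("ls", "-l")
	filter := exec.Command("grep", ".go")

	pipe, err := list.StdoutPipe() // read end of the pipe, handed to grep
	if err != nil {
		panic(err)
	}
	filter.Stdin = pipe
	filter.Stdout = os.Stdout

	if err := list.Start(); err != nil {
		panic(err)
	}
	if err := filter.Start(); err != nil {
		panic(err)
	}
	list.Wait()
	filter.Wait()
}
```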


Island Enterprise Browser: Intelligent security built into the browsing session

It is essential to begin with the fact that Island policies are straightforward to configure. By the nature of the Application Boundary concept mentioned above, there is usually little need to focus on the painful granular efforts of traditional data protection approaches. Leveraging such facilities will ensure that organizational data remains within the corporate application footprint, allowing data to move freely when desired across that footprint, but can prevent the spillage of corporate data into undesirable places. ... Island has very flexible logging and audit features. Because the browser is a natural termination point for SSL traffic, Island does not have to leverage complex break-and-inspect mechanics required by countless security tools to gain visibility and control. The result is that Island has unimpeded, very natural visibility over application usage. Most importantly, the ability to have dexterity in audit logging delivers complete privacy for the user at the proper times, anonymized but audited logging at other times, and even deep audit over any application engagement at other times.


Get Ahead of the Curve: Crafting a Roadmap to a Successful Data Governance Strategy!

Crafting a seamless data governance plan is crucial for any organization that wants to move from data anarchy to order. A well-designed data governance plan can help ensure that data is accurate, consistent, and secure. It can also help organizations comply with regulatory requirements and avoid costly data breaches. The starting point is identifying the key stakeholders and their roles in the data governance process: who will be responsible for data management, who will be responsible for data quality, and who will be responsible for data security. Once the key stakeholders have been identified, the next step is to establish clear policies and procedures for data governance, including defining data standards, establishing data quality metrics, and creating data security protocols, along with a system for monitoring and enforcing those policies and procedures. By following these steps, organizations can create a seamless data governance plan that will help them move from data anarchy to order.


History Never Repeats. But Sometimes It Rhymes.

Imagine Red Hat succeeds in eliminating all vendors it calls “rebuilders” from Enterprise Linux. Congratulations, Red Hat! You’re now king of the hill, and all users who want a “true” Enterprise Linux will be purchasing Red Hat subscriptions! What will this do for the Enterprise Linux ecosystem? According to Mike McGrath, Red Hat’s Vice President of Core Platforms, this will allow Red Hat to invest all that extra subscription money into creating new and innovative open source software and employing lots of new open source developers. Maybe. But having been in the industry for a long time, my suspicions are that IBM shareholders might have other uses for that money. More likely, in my opinion, is that users, who value freedom and control over their own computing destiny more than anything else, will swiftly migrate off the RHEL platform. Where will they go? That’s where my crystal ball isn’t so good. Maybe some will go to Debian and derivatives. Some will go to SuSE Enterprise Linux. The short-sighted ones will migrate workloads back to the welcoming arms of Microsoft Windows, or, being more charitable about Microsoft, an Enterprise Linux distribution running on top of Microsoft Azure. 


How to Address AI Data Privacy Concerns

Companies developing AI systems can take several approaches to protecting data privacy. Data scientists need to be educated on data privacy, but company leadership needs to recognize they are not the ultimate experts on privacy. “Companies also can provide their data scientists with tools that have built-in guardrails that enforce compliance,” says Manasi Vartak, founder and CEO of Verta, a company that provides management and operations solutions for data science and machine learning teams. “Companies have to deploy a variety of technical strategies to protect data privacy; there is an entire spectrum of privacy preservation technologies out there to address such issues,” says Adnan Masood, PhD, chief AI architect at digital transformation solutions company UST. He points to approaches like tokenization, which replaces sensitive data elements with non-sensitive equivalents. Anonymization and the use of synthetic data are also among the potential privacy preservation strategies. “On the cutting edge, we have techniques like fully homomorphic encryption, which allows computations to be performed on encrypted data without ever needing to decrypt it,” says Masood.
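
Of the techniques listed, tokenization is the simplest to picture: sensitive values are swapped for opaque tokens, and the real values live only in a tightly controlled vault. The Go sketch below is purely illustrative; a production tokenization service adds access control, auditing, and durable storage.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// Vault maps tokens back to the original sensitive values. In a real
// deployment it would run as a separately secured service.
type Vault struct {
	store map[string]string
}

func NewVault() *Vault { return &Vault{store: map[string]string{}} }

// Tokenize replaces a sensitive value with a random, meaningless token.
func (v *Vault) Tokenize(value string) string {
	b := make([]byte, 16)
	rand.Read(b)
	token := "tok_" + hex.EncodeToString(b)
	v.store[token] = value
	return token
}

// Detokenize would be restricted to authorized services only.
func (v *Vault) Detokenize(token string) (string, bool) {
	value, ok := v.store[token]
	return value, ok
}

func main() {
	vault := NewVault()
	// The training or analytics pipeline only ever sees the token.
	token := vault.Tokenize("123-45-6789")
	fmt.Println("stored in dataset:", token)

	original, _ := vault.Detokenize(token)
	fmt.Println("recovered by authorized service:", original)
}
```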


India’s stock market regulator Sebi releases cybersecurity consultation paper

Cybersecurity experts hailed the consultation paper by Sebi as a step in the right direction. "By and large these entities are becoming very fertile targets of continuing cyberattacks and cybersecurity breaches," said Dr. Pavan Duggal, cyber law expert and practicing advocate at the Supreme Court of India, adding that there has been a need felt for quite some time for a consolidated cybersecurity and cyberresilience framework. "Sebi had come up with a cyberresilience framework some years ago, but the intersection of cybersecurity and cyberresilience had not been addressed. It is also an extension of what the existing principles of law are already stating," Duggal said. "Under the new updated IT rules 2023, every regulated entity has to adopt reasonable security practices and procedures to protect third-party data. In Sebi-regulated entities, these could become the parameters of due diligence on cybersecurity," Duggal said, adding that in the absence of a dedicated cybersecurity law and cyberresilience law, the framework assumes more relevance.


Taking the risk out of the semiconductor supply chain

Even before the most recent supply chain challenges, political leaders around the world have been taking a close look at the current semiconductor supply chain model. Semiconductors across the global economy have the potential to shape supply chains for numerous commercial electronics, as well as components essential to critical infrastructures, such as telecommunications and financial services. Perhaps more importantly, the supply of semiconductors has worldwide security implications, affecting national and regional defense and emergency response capabilities. Given its geopolitical impact, many policymakers concluded that the existing semiconductor supply chain model is too risky and are responding accordingly. Some of that risk is being addressed at national and regional levels, such as the U.S. CHIPS Act and the EU Chips Act. However, investments in these initiatives are heavily focused on building new wafer fabrication facilities, or “fabs.” While fabs make up a critical part of the manufacturing process, increased fab production alone cannot better secure the global supply chain.


Are cloud architects biased?

Don’t get me wrong; this does not mean a specific technology stack is incorrect. At issue is that we’ve pulled back from working from the requirements to the solutions, and now things are the other way around. The reasons that many people are “compromised” are easy to define. Everything works. You put up a technology stack to adapt to solve the problem; however, if it’s not the fully optimized solution, it will cost the business millions of dollars over its life cycle, and at some point, it will stop working and will have to be fixed. There is no immediate punishment for picking underoptimized solutions. Therefore, success is declared, and the project leader moves on to other decisions with their bias reinforced by the false perception of success. This dysfunctional process makes things worse and creates so much technical debt. I’m not suggesting that cloud architects are getting money under the table to pick one technology stack over another. I am concerned they have not opened their minds to other options, even significant changes such as leveraging traditional on-premises solutions over cloud-based ones or vice versa.



Quote for the day:

"The litmus test for our success as Leaders is not how many people we are leading, but how many we are transforming into leaders" -- Kayode Fayemi