Daily Tech Digest - July 26, 2023

How digital humans can make healthcare technology more patient-centric

Like humans, digital humans have an anatomy, and several technologies are used to create them. The Representation: the “face” of the digital entity can be created in the likeness of a real person or as a caricature. The quality of this representation is critical to a successfully designed digital human. Natural Language Processing (NLP) or Natural Language Understanding (NLU): NLP/NLU ensures that the digital human can properly interpret input, handling tasks such as speech detection, speech-to-text translation, and language recognition and detection. Advanced forms of NLP/NLU will include sign language as well. Cognitive Services: cognitive services are used to create personalized communication, including language translation, speech synthesis, voice customization, speech prosody and pitch, nomenclature, and specialized pronunciation. Artificial Intelligence: the AI layer, whether generative, extractive, or another form, provides contextual conversation responses and context recognition, and, in the case of generative AI, content creation.
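
The components above form a rough pipeline from a user's utterance to the digital human's reply. A minimal sketch of that flow in Python (the function names and canned outputs are illustrative placeholders, not any vendor's actual interface):

```python
# Illustrative pipeline only: each stage stands in for the NLP/NLU, cognitive
# services, and AI layers described above; nothing here is a real product API.

def speech_to_text(audio: bytes) -> str:
    # NLP/NLU: speech detection and speech-to-text translation
    return "i need to refill my prescription"

def understand(text: str) -> dict:
    # NLU: intent and language recognition
    return {"intent": "refill_prescription", "language": "en"}

def generate_reply(intent: dict) -> str:
    # AI layer: contextual conversation response (a generative model would
    # normally produce this text)
    return "Sure, which medication would you like to refill?"

def text_to_speech(reply: str, voice: str = "warm") -> bytes:
    # Cognitive services: speech synthesis, prosody, pitch, pronunciation
    return reply.encode("utf-8")  # stand-in for synthesized audio

def respond(audio_in: bytes) -> bytes:
    return text_to_speech(generate_reply(understand(speech_to_text(audio_in))))

print(respond(b"...incoming audio..."))
```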


CISO to BISO – What's your next role?

The role of a BISO has emerged over the past decade, as organisations recognise the need for dedicated security roles and skills within specific business units or departments. While it is challenging to pinpoint an exact date when the role of BISO became established across all industries, it can be traced back to the increasing emphasis on information security, the evolving nature of cybersecurity threats and the increasingly complex technical infrastructures in use. As businesses have become more digital, data-centric, and interconnected, the complexity and diversity of security risks have grown exponentially. Traditional approaches to information security, where the responsibility solely resides with the IT department or a centralised security team, have proved inadequate to address the unique security challenges faced by businesses today. ... When implementing information security in larger organisations, we would look for security champions within operational or support functions. People who showed an interest in the world of cybersecurity were usually offered a support role on a voluntary basis.


Top cybersecurity tools aimed at protecting executives

A recent Ponemon report, sponsored by BlackCloak, revealed that 42% of respondents indicated that key executives and family members have already experienced at least one cyberattack. While it's likely that cybercriminals will target executives and the digital assets they have access to, organizations are not responding with suitable strategies, budgets, and staff, the report found. Just over half (58%) of respondents reported that the prevention of threats against executives and their digital assets is not covered in their cyber, IT and physical security strategies and budget. The lack of attention is demonstrated with only 38% of respondents reporting a dedicated team to prevent or respond to cyber or privacy attacks against executives and their families. The best practice to do this well would be to protect the executive as well as their family, inner circle, and associates with a broad range of measures, Agency's Executive Digital Protection report noted. The solutions need to balance breadth, value, privacy, and specialization, it said. 


How WebAssembly will transform edge computing

As the next major technical abstraction, Wasm aspires to address the common complexity inherent in the management of the day-to-day dependencies embedded into every application. It addresses the cost of operating applications that are distributed horizontally, across clouds and edges, to meet stringent performance and reliability requirements. Wasm’s tiny size and secure sandbox mean it can be safely executed everywhere. With a cold start time in the range of 5 to 50 microseconds, Wasm effectively solves the cold start problem. It is compatible with Kubernetes without being dependent upon it. Its diminutive size means it can be scaled to a significantly higher density than containers and, in many cases, it can even be performantly executed on demand with each invocation. But just how much smaller is a Wasm module compared to a micro-K8s containerized application? An optimized Wasm module is typically around 20 KB to 30 KB in size. When compared to a Kubernetes container, the Wasm compute units we want to distribute are several orders of magnitude smaller.
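
For a sense of scale, here is a sketch using the wasmtime Python bindings (my assumption; the article does not name a runtime) that compiles a trivial module, prints its binary size, and invokes it:

```python
# Sketch assuming the `wasmtime` Python package (pip install wasmtime). A
# trivial module compiles to well under a kilobyte, far below even the
# ~20-30 KB cited for typical optimized modules.

from wasmtime import Engine, Instance, Module, Store, wat2wasm

WAT = """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

wasm_bytes = wat2wasm(WAT)
print(f"module size: {len(wasm_bytes)} bytes")

engine = Engine()
store = Store(engine)
instance = Instance(store, Module(engine, wasm_bytes), [])
add = instance.exports(store)["add"]
print("2 + 3 =", add(store, 2, 3))
```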


Data Governance Trends and Best Practices for Storage Environments

The more intelligent the data layer is, the more value the data can provide. More valuable data makes the role of data governance stronger within the organization. Active archive solutions can serve as a framework for data governance by including an intelligent data management software layer that automatically places data where it belongs and optimizes its location based on cost, performance, and user access needs. “Data governance is the process of managing the availability, usability, integrity and security of enterprise data,” said Rich Gadomski, head of tape evangelism at FUJIFILM Recording Media U.S.A. and co-chair of the Active Archive Alliance. ... Supporting active archives with optical disk storage technologies can provide long-term data preservation. These technologies are designed to withstand environmental factors like temperature, humidity, and magnetic interference, ensuring the integrity and longevity of archived data. With a typical lifespan of hundreds of years or more, optical disks are well-suited for archival purposes.


Dr. Pankaj Setia on the challenges that will redefine CIOs’ careers

First, a risk-averse culture may be addressed through a two-pronged approach. The first step is for CIOs to champion training and engagement of employees, to create a digital mindset and enhance understanding of the digital transformation being undertaken. It is imperative that the employees are excited about the transformation. ... A second step for CIOs is to work toward getting buy-in from top management. For CIOs to get desired results, the board and top management team (TMT) must actively champion digital transformation initiatives. Many examples from the corporate world underline the role of top leadership in engaging and motivating employee teams. Second, overcoming the barriers due to siloed strategy is a complex endeavor. It is not always easy to overcome these, as professional management relies on specialization in a functional domain (e.g., marketing, finance, human resources, etc.). However, because digital transformation inherently spans functional domains, siloed strategies — which emphasize super specialization — are not optimal. Therefore, CIOs should look to create cross-functional teams.


Risks and Strategies to Use Generative AI in Software Development

Among the risks of using AI in software development is the potential that it regurgitates bad code that has been making the rounds in the open-source world. “There’s bad code being copied and used everywhere,” says Muddu Sudhakar, CEO and co-founder of Aisera, developer of a generative AI platform for enterprise. “That’s a big risk.” The risk is not simply poorly written code being repeated -- the bad code might be put into play by bad actors looking to introduce vulnerabilities they may exploit at a later date. Sudhakar says organizations that draw upon generative AI, and other open-source resources, should put controls in place to spot such risks if they intend to make AI part of the development equation. “It’s in their interest because all it takes is one bad code,” he says, pointing to the long-running hacking campaign behind the SolarWinds data breach. The skyrocketing appeal of AI for development seems to outweigh concerns about the potential for data to leak or for other issues to occur. “It’s so useful that it’s worth actually being aware of the risks and doing it anyway,” says Babak Hodjat, CTO of AI and head of Cognizant AI Labs.


Supply Chain, Open Source Pose Major Challenge to AI Systems

Bengio said one big risk area around AI systems is open-source technology, which "opens the door" to bad actors. Adversaries can take advantage of open-source technology without huge amounts of compute or strong expertise in cybersecurity, according to Bengio. He urged the federal government to establish a definition of what constitutes open-source technology - even if it changes over time - and use it to ensure future open-source releases for AI systems are vetted for potential misuse before being deployed. "Open source is great for scientific progress," Bengio said. "But if nuclear bombs were software, would you allow open-source nuclear bombs?" Bengio said the United States must ensure that spending on AI safety is equivalent to how much the private sector is spending on new AI capabilities, either through incentives to businesses or direct investment in nonprofit organizations. The safety investments should address the hardware used in AI systems as well as cybersecurity controls necessary to safeguard the software that powers AI systems.


Zero-Day Vulnerabilities Discovered in Global Emergency Services Communications Protocol

In a demonstration video of CVE-2022-24401, researchers showed that an attacker would be able to capture the encrypted message by targeting a radio to which the message was being sent. Midnight Blue founding partner Wouter Bokslag says that in none of the circumstances for this vulnerability do you get your hands on a key: "The only thing you're getting is the key stream, which you can use to decrypt arbitrary frames or arbitrary messages that go over the network." A second demonstration video of CVE-2022-24402 reveals that there is a backdoor in the TEA1 algorithm that affects networks relying on TEA1 for confidentiality and integrity. It was also discovered that the TEA1 algorithm uses an 80-bit key that an attacker could brute-force, allowing them to listen in to the communications undetected. Bokslag admits that using the term backdoor is strong, but says it is justified in this instance. "As you feed an 80-bit key to TEA1, it flows through a reduction step which leaves it with only 32 bits of key material, and it will carry on doing the decryption with only those 32 bits," he says.
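
To put that key reduction in perspective, a rough back-of-the-envelope comparison (the trial rate is my assumption, not a figure from the researchers):

```python
# Compare the nominal 80-bit TEA1 keyspace with the effective 32 bits left
# after the reduction step. The assumed rate of one billion key trials per
# second is illustrative only; it is modest for dedicated cracking hardware.

SECONDS_PER_YEAR = 365 * 24 * 3600
rate = 1_000_000_000  # hypothetical key trials per second

full = 2 ** 80     # advertised key length
reduced = 2 ** 32  # effective key material after reduction

print(f"80-bit exhaustive search: ~{full / rate / SECONDS_PER_YEAR:.1e} years")
print(f"32-bit exhaustive search: ~{reduced / rate:.1f} seconds")
```

The gap is the whole story: the advertised keyspace would take tens of millions of years to exhaust at that rate, while the reduced one falls in seconds.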


Enterprises should layer-up security to avoid legal repercussions

There are two competing temptations in the technology landscape that the seasoned security professional must navigate. The first is the temptation to totally trust the power of the tool. An overly optimistic reliance on vendor tools and promises can fail to identify security issues if the tools are not properly implemented and operationalized in your environment. A shiny SIEM tool, for example, is useless unless you have clearly documented response actions to take for each alert, as well as fully trained personnel to handle investigations. The second temptation, which I believe is more prevalent within tech and SaaS companies, is to trust no tool except for in-house tech. The thought process goes as follows: “Since we have a solid development team, and we want to keep a bench of developers for any eventuality, we need to keep their skills sharp, so we might as well build our own tools.” It’s a sound argument — up to a point. However, it may be a bit arrogant to believe your company has the expertise to develop the best-in-class SIEM solutions, ticketing systems, SAST tools, and what have you.



Quote for the day:

"If you don't demonstrate leadership character, your skills and your results will be discounted, if not dismissed." -- Mark Miller

Daily Tech Digest - July 25, 2023

The real risk of AI in network operations

One operations manager who tried generative AI technology on their own historical network data said that it suggested a configuration change that, had it been made, would have broken the entire network. “The results were wrong a quarter of the time, and very wrong maybe an eighth of the time,” the operations manager said. “I can’t act on that kind of accuracy.” ... That raises my second point about a lack of detail on how AI reached a conclusion. I’ve had generative AI give me wrong answers that I recognized because they were illogical, but suppose you didn’t have a benchmark result to test against? If you understood how the conclusion was reached, you’d have a chance of picking out a problem. Users told me that this would be essential if they were to consider generative AI a useful tool. They don’t think, nor do I, that the current generative AI state of the art is there yet. What about the other, non-generative, AI models? There are well over two dozen operations toolkits out there that claim AI or AI/ML capability. Users are more positive on these, largely because they have a limited scope of action and leave a trail of decision-making steps that can be checked quickly.


Exploring an Agilist's Story of Perseverance

“There are a few things in technology that help people with MS because they have the same problem with speech, but they’re not really effective for me,” because you still have to go back and edit everything. If you are living with ataxia, “there is a group called the National Ataxia Foundation. It is a great support group; you don’t feel like you are going through this alone. They post things about technology and tools you can use,” Apuroopa said. She also recommends utilizing your HR resources if you have any need for accommodation. An accommodation request form may be the right way to access technology or request an adjustment or change in your work environment or duties based on a medical condition. Apuroopa’s employer offers a work-from-home option. “The remote environment adds complexity,” she said, because not everyone is willing to turn their cameras on for various reasons, and you end up missing that facial connection and body language, but she’s also thankful for the option to stay home.


A critical cybersecurity backup plan that too many companies are ignoring

With the departure of a CISO, there is a loss of valuable institutional knowledge, which can impede an organization’s ability to adapt to rapidly evolving cyber threats, said Daniel Soo, risk and financial advisory principal in cyber and strategic risk at consulting firm Deloitte. “The lack of a successor could disrupt business-as-usual cybersecurity operations, resulting in delays, gaps in critical cyber risk management activities, and hindered cyber incident response and decision-making,” Soo said. In addition, CISO succession planning is key to ensuring that an organization has the right person at the right time to help drive the organization’s cyber objectives, Soo said. ... CISO succession planning should also involve anticipating future security requirements by considering the evolving nature of the business and technology landscape. “CISOs should analyze the security implications of these trends and develop policies, technologies, and skills to address future needs,” he said. “Implementing a training program can help ensure that employees are equipped with the necessary skills to tackle upcoming security challenges.”


Bridging the cybersecurity skills gap through cyber range training

Cyber ranges take traditional cyber training and turn it into real-life, experiential learning so learners can actually apply their knowledge and skills and gain real experience using a simulation method. SOC analysts, who are the last line of defense, need to continually engage in these simulations to strengthen their capabilities and create “muscle memory.” An ongoing cyber range training program with real-life attacks enhances their preparedness as individuals and as a cohesive team through immersive experiences. One thing to note is that not all cyber ranges are equal to each other. They can vary in terms of their purpose, complexity, and available features, tools, and technology. To ensure your team is getting the most effective training, it’s critical to use a dynamic range with live-fire attacks that the whole team can participate in together, versus more of a directed lab environment or individual exercises that team members do in parallel. 


Why cyber security should be part of your ESG strategy

In fact, the investment community has been singling out cyber security as one of the major risks that ESG programmes will need to address due to the potential financial losses, reputational damage and business continuity risks posed by a growing number of cyber attacks and data breaches. Investment firm Nomura already takes into account an investee firm’s cyber security performance in its credit ESG scoring model, while KPMG noted in its report that cyber security is not only applicable to the governance aspects of ESG, but also has social and environmental implications. ... “That trust you want to build from a social standpoint comes from sound cyber security practices, so you can tell customers you’re taking the right steps to protect their identity and financial information,” he added. But even after organisations have identified aspects of their businesses that are at risk, building up their risk profile remains challenging as they are often unaware of what technology assets they have, coupled with the lack of efforts to assess technical risks, Wenzler said.


Boost your tech ROI with Engineering Effectiveness

Learnings from numerous agile, DevOps, and platform transformation projects have shown that the productivity of engineering teams in most organizations is around 30 percent of their total potential. Therefore, a whopping 70 percent improvement is possible, even necessary if you want to keep up with digital-native competitors. You can achieve this by investing in both technology and the development teams themselves. Create an environment equipped with the right platforms, methodologies, and workplace culture that makes teams more productive and helps them collaborate more efficiently. It's also vital to give developers the opportunity and resources to keep their skills up to date. ... The path to modernization is not only about allocating more resources, but fundamentally about transforming business processes and culture. Talent is better utilized when outdated and inefficient workflows are revised. A critical look at the organization, involving senior management, is essential to uncover all bottlenecks. Changing traditional work and thought patterns can be challenging. In such cases, external assistance coupled with tried-and-tested frameworks and tools can be of help. 


Social Intelligence Is the Next Big Step for AI

When it comes to being able to decipher nonverbal cues like body language or facial expressions, AI still lacks many of the social skills that many of us humans take for granted. To help AI develop those social skills, new work from Chinese researchers suggests that a multidisciplinary approach will be needed — one that adapts what we know about cognitive science and uses computational modeling to better identify the disparities between the social intelligence of machine learning models and their human counterparts. “[Artificial social intelligence or ASI] is distinct and challenging compared to our physical understanding of the world; it is highly context-dependent,” said first author Lifeng Fan of the Beijing Institute for General Artificial Intelligence (BIGAI) in a statement. “Here, context could be as large as culture and common sense, or as little as two friends’ shared experience. This unique challenge prohibits standard algorithms from tackling ASI problems in real-world environments, which are frequently complex, ambiguous, dynamic, stochastic, partially observable and multi-agent.”


Why Ambient Computing May Be the Next Big Trend

Ambient computing will become an everyday reality through the widespread adoption of connected devices, the Internet of Things (IoT), and advancements in artificial intelligence, Bilay predicts. “As these technologies become more sophisticated, affordable, and seamlessly integrated into our environments, ambient computing will permeate our homes, workplaces, and public spaces.” ... Bilay says users will need to remain vigilant about data protection. He cautions that ambient computing’s reliance on interconnected systems creates dependencies that could make users susceptible to service disruptions caused by technical failures or compatibility issues. Security is another major concern. “We’ve already seen cases in which an estranged spouse uses the smart thermostat or smart lighting to harass their ex,” Loukides says. When devices are networked, attacks could occur at a larger and more devastating scale. “We’re already familiar with ransomware,” he notes. “Could somebody extort a vendor like Honeywell or Nest because they’ve taken control over all the thermostats?”


Has generative AI quietly ushered in a new era of shadow IT on steroids?

There are dozens of great studies showing the dangers that come with shadow IT. A few of the concerns include decreased control over sensitive data, an increased attack surface, risk of data loss, compliance issues, and inefficient data analysis. Yes, there are many other security, privacy, and legal issues that can surface with shadow IT. But what concerns me the most is the astonishing growth in generative AI apps -- along with how fast these apps are being adopted for a myriad of reasons. Indeed, if the internet can best be described as an accelerator for both good and evil -- which I believe is true -- generative AI is supercharging that acceleration in both directions. Many are saying that the adoption of generative AI apps is best compared to the early days of the internet, with the potential for unparalleled global growth. ... If you're questioning whether generative AI apps qualify as shadow IT, as always it depends on your situation. If the application is appropriately licensed and all the data stays within the confines of your organization's secure control, generative AI can fit neatly into your enterprise portfolio of authorized apps.


What Is a Modern Developer?

The desire to simplify one's life, automate everything, and solve problems is the key thing that drives many modern developers. If this desire sounds familiar, then you are a developer. In the near future, you may only need to think of what the code should be and then you can write it out in sentences — aka a prompt engineer. This is coming so quickly that this future could be Tuesday. The heterogeneous nature of data, data producers, applications, and services that drives everyone to be a developer also highlights the importance of developers. We need to build applications and other things since there are so many diverse applications and systems that need to be joined together to solve an entire real-world requirement. ... The number of activities a developer has to do in modern development today goes beyond just designing, creating, building, testing, and deploying applications. Often in today’s resource-constrained environments, a common additional role is to gather and translate user requirements into buildable assets. Responsibilities also include internationalization, monitoring, managing, extracting data, and more.



Quote for the day:

“When people are financially invested, they want a return. When people are emotionally invested, they want to contribute.” -- Simon Sinek

Daily Tech Digest - July 24, 2023

A CDO Call To Action: Stop Hoarding Data—Save The Planet

Many of the implications of rampant data hoarding are reasonably well known—including the potential compliance and privacy risks associated with storing petabytes of data you know absolutely nothing about. For many companies, "don’t ask, don’t tell" seems to be the approach when it comes to ensuring compliant management of their dark data—or at the very least, ignoring dark data represents a business risk many compliance officers seem willing to take. Other implications on the cost of storing dark data, or the potential value that could be unlocked by operationalizing it, are also often discussed. For many CDOs, the motivation to store troves of data that may never get used is a form of FOMO, where the fear of being unable to support a future request for new analytical insights outweighs the cost of data storage. In these situations, the unwillingness of many CDOs to apply methods to measure the business value of data is a primary enabler of data hoarding, where the idea that "we might need it someday" is sufficient to drive millions in annual revenues for cloud service providers.


Google, Microsoft, Amazon, Meta Pledge to Make AI Safer and More Secure

Meta said it welcomed the White House agreement. Earlier this week, the company launched the second generation of its AI large language model, Llama 2, making it free and open source. "As we develop new AI models, tech companies should be transparent about how their systems work and collaborate closely across industry, government, academia and civil society," said Nick Clegg, Meta's president of global affairs. The White House agreement will "create a foundation to help ensure the promise of AI stays ahead of its risks," Brad Smith, Microsoft vice chair and president, said in a blog post. Microsoft is a partner on Meta's Llama 2. It also launched AI-powered Bing search earlier this year that makes use of ChatGPT and is bringing more and more AI tools to Microsoft 365 and its Edge browser. The agreement with the White House is part of OpenAI's "ongoing collaboration with governments, civil society organizations and others around the world to advance AI governance," said Anna Makanju, OpenAI vice president of global affairs.


UN Security Council discusses AI risks, need for ethical regulation

During the meeting, members stressed the need to establish an ethical, responsible framework for international AI governance. The UK and the US have already started to outline their position on AI regulation, while at least one arrest has occurred in China this year after the Chinese government enforced new laws relating to the technology. Malta is the only current non-permanent council member that is also an EU member state and would therefore be governed by the bloc’s AI Act, the draft of which was confirmed in a vote last month. Although AI can bring huge benefits, it also poses threats to peace, security and global stability due to its potential for misuse and its unpredictability — two essential qualities of AI systems, Clark said in comments published by the council after the meeting. “We cannot leave the development of artificial intelligence solely to private-sector actors,” he said, adding that without investment and regulation from governments, the international community runs the risk of handing over the future to a narrow set of private-sector players.


How to Choose Carbon Credits That Actually Cut Emissions

Risk is the biggest driver in business and — with trillions of dollars in annual climate-related costs and damage — the climate crisis is fast becoming a business crisis. Corporations must act now to minimize losses, illustrate meaningful climate action to shareholders and comply with fast-approaching climate regulations. Carbon credits are an important approach to scaling climate action globally and are a fast-growing strategy for delivering on corporate ESG goals. While these offsets are part of nearly every scenario that keeps global warming to 1.5 degrees Celsius, legacy carbon markets lack broad public trust: Impactful carbon solutions require clear guidelines and proven, verifiable data. ... This is an all-hands-on-deck moment. We must engage proven, reliable, and equitable methods to meet what may be the greatest threat to the future of humanity and the planet we inhabit. Carbon credits, when implemented responsibly and at scale, can be a very effective tool for humanity to use in the fight to limit the damages from climate change. 


BGP Software Vulnerabilities Overlooked in Networking Infrastructure

At the heart of the vulnerabilities was message parsing. Typically, one would expect a protocol to check that a user is authorized to send a message before processing the message. FRRouting did the reverse, parsing before verifying. So if an attacker could have spoofed or otherwise compromised a trusted BGP peer's IP address, they could have executed a denial-of-service (DoS) attack, sending malformed packets in order to render the victim unresponsive for an indefinite amount of time. ... "Originally, BGP was only used for large-scale routing — Internet service providers, Internet exchange points, things like that," dos Santos says. "But especially in the last decade, with the massive growth of data centers, BGP is also being used by organizations to do their own internal routing, simply because of the scale that has been reached," to coordinate VPNs across multiple sites or data centers, for example. More than 317,000 Internet hosts have BGP enabled, most of them concentrated in China (around 92,000) and the US (around 57,000). Just under 2,000 run FRRouting — though not all, necessarily, with BGP enabled — and only around 630 respond to malformed BGP OPEN messages.
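
The flaw, in other words, was one of ordering. A minimal sketch of the safe order, where the sender is checked against configured peers before any parsing happens (this is not FRRouting code; the peer list and function names are invented for illustration):

```python
# Toy illustration of "verify the peer before parsing the message". The BGP
# header layout used here (16-byte marker, 2-byte length, 1-byte type) matches
# the real protocol, but the parser and peer handling are deliberately minimal.

import struct

CONFIGURED_PEERS = {"192.0.2.1", "192.0.2.2"}  # hypothetical trusted peers

def parse_header(raw: bytes) -> dict:
    if len(raw) < 19:
        raise ValueError("truncated BGP header")
    length = struct.unpack("!H", raw[16:18])[0]
    return {"length": length, "type": raw[18]}

def handle_message(source_ip: str, raw: bytes):
    # Authorization first: packets from unknown hosts never reach the parser,
    # so malformed input from a spoofed address cannot hang or crash it.
    if source_ip not in CONFIGURED_PEERS:
        return None
    return parse_header(raw)

print(handle_message("203.0.113.9", b"\x00"))                       # dropped
print(handle_message("192.0.2.1", b"\xff" * 16 + b"\x00\x13\x01"))  # parsed
```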


How IT leaders are driving new revenue

CIOs who are driving new revenue are: Delivering technologies designed to meet specific business outcomes. For example, Narayaran has seen CIOs focus their teams on creating applications designed not merely on high availability and reliability but on hitting very specific business goals — such as enabling on-time deliveries to its customers. Unlocking data’s potential. Narayaran says he has also seen CIOs make big plays with their data programs, investing in the technology infrastructure needed to bring together and analyze data sets to create new services or products and drive business objectives such as improved customer retention and customer stickiness. Co-creating with their business unit colleagues. Notably, Narayaran says CIOs are approaching their business unit colleagues with such proposals. “CIOs [are saying], ‘Here’s an opportunity. We have this data, and we can make this data do this for you,’ and they then bring that to life. And if they say, ‘This is what we have and this is what we can do,’ then the business, too, can come up with new ideas.”


Data centers grapple with staffing shortages, pressure to reduce energy use

A consistent issue for the past decade, attracting and retaining new talent will continue to challenge data center leaders, according to Uptime. About two-thirds of survey respondents said they have “problems recruiting or retaining staff,” but the trend appears to be stabilizing as it hasn’t increased over last year’s data. According to the report: More than one third (35%) of respondents say their staff is being hired away, which is more than double the 2018 figure of 17%. And many believe operators are poaching from within the sector, with 22% of respondents reporting that they lost staff to their competitors. Staffing challenges are highest among operations management staff and those specializing in mechanical and electrical trades, as well as with junior level staff. “It's been challenging the data center industry for about a decade. It has been escalating in recent years. Our survey data this year suggests that it may, at least this year, not be getting worse, maybe stabilizing. And poaching is a problem of people who do get qualified applicants into jobs – they do find them hired away,” said Jacqueline Davis, research analyst at Uptime Institute.


Why API attacks are increasing and how to avoid them

First, exposing APIs to network requests significantly increases the attack surface, says Johannes Ullrich, dean of research at the SANS Technology Institute. “An attack no longer needs access to the local system but can attack the API remotely,” he says. Even worse, APIs are designed to be easy to find and use, Ullrich says. They’re “self-documenting” and are typically based on common standards. That makes them convenient for developers, but also prime targets for hackers. Since APIs are designed to help applications talk to one another, they often have access to core company data, such as financial information or transaction records. It’s not only the data itself that’s at risk. The API documentation can also give outsider insights into business logic, says Ullrich. “This insight may make finding weaknesses in the business process easier.” Then there’s the quantity issue. Companies deploying cloud-based applications no longer deploy a single monolithic application with a single access point in and out. 


Journey to Quantum Supremacy: First Steps Toward Realizing Mechanical Qubits

Research and development in this field are growing at an astonishing pace as each system or platform races to outrun the others. To mention a few, platforms as diverse as superconducting Josephson junctions, trapped ions, topological qubits, ultra-cold neutral atoms, or even diamond vacancies constitute the zoo of possibilities for making qubits. So far, only a handful of qubit platforms have demonstrated the potential for quantum computing, ticking the checklist of high-fidelity controlled gates, easy qubit-qubit coupling, and good isolation from the environment, which means sufficiently long-lived coherence. ... The realization of a mechanical qubit is possible if the quantized energy levels of a resonator are not evenly spaced. The challenge is to keep the nonlinear effects big enough in the quantum regime, where the oscillator’s zero-point displacement is minuscule. If this is achieved, then the system may be used as a qubit by manipulating it between the two lowest quantum levels without driving it into higher energy states.
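
A standard way to see why uneven spacing matters is the weakly anharmonic (Kerr-type) oscillator; the formula below is a generic textbook model, not taken from the article, with ω the resonator frequency and K the anharmonicity:

```latex
E_n = \hbar\omega\left(n + \tfrac{1}{2}\right) - \frac{\hbar K}{2}\, n(n-1),
\qquad
(E_1 - E_0) - (E_2 - E_1) = \hbar K .
```

With K = 0 every gap equals ħω, so a drive resonant with the 0→1 transition also climbs into higher levels; with K large enough relative to the drive strength and decoherence rates, the 0→1 transition can be addressed selectively and the two lowest levels used as a qubit.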


ChatGPT Amplifies IoT, Edge Security Threats

ChatGPT and its ilk are rapidly appearing integrated or embedded in commercial and consumer IoT of all types. Many imagine AI models to be the most sophisticated security threat to date. But most of what is imagined is indeed imaginary. “Now, if an actual AI emerges, be very worried if the kill switch is very far away from humans,” says Jayendra Pathak, chief scientist at SecureIQLab. He, like others in security and AI, agree that the chances of an actual general artificial intelligence developing any time soon are still very low. But as to the latest AI sensation, ChatGPT, well that’s another kind of scare. “ChatGPT poses [insider] threats -- similar to the way rogue or ‘all-knowing employees’ pose -- to IoT. Some of the consumer IoT vulnerabilities pose the same risk as a microcontroller or microprocessor does,” Pathak says. In essence, ChatGPT’s potential threats spring from its training to be helpful and useful. Such a rosy prime directive can be very harmful, however. 



Quote for the day:

"No man can stand on top because he is put there." -- H. H. Vreeland

Daily Tech Digest - July 23, 2023

Sustainable Computing - With An Eye On The Cloud

There are two parts to sustainability goals: first, how do cloud service providers make their data centers more sustainable? And second, what practices can cloud service customers adopt to better align with the cloud and make their workloads more sustainable? Let us first look at the question of how businesses should be planning for sustainability. How should they bake in sustainability aspects as part of their migration to the cloud? The first aspect to consider, of course, is choosing the right cloud service provider. It is essential to select a carbon-thoughtful provider based on its commitment to sustainability as well as how it plans, builds, powers, operates, and eventually retires its physical data centers. The next aspect to consider is the process of migrating services to an infrastructure-as-a-service deployment model. Organizations should carry out such migrations without re-engineering for the cloud, as this can help to drastically reduce energy and carbon emissions compared with running the same workloads in an on-premises data center. 


The Intersection of AI and Data Stewardship: A New Era in Data Management

In addition to improving data quality, AI can also play a crucial role in enhancing data security and privacy. With the increasing number of data breaches and growing concerns around data privacy, organizations must ensure that their data is protected from unauthorized access and misuse. AI can help organizations identify potential security risks and vulnerabilities in their data infrastructure and implement appropriate measures to safeguard their data. Furthermore, AI can assist in ensuring compliance with various data protection regulations, such as the General Data Protection Regulation (GDPR), by automating the process of identifying and managing sensitive data. Another area where AI and data stewardship intersect is in data governance. Data governance refers to the set of processes, policies, and standards that organizations use to ensure the proper management of their data assets. AI can help organizations establish and maintain robust data governance practices by automating the process of creating, updating, and enforcing data policies and rules. 


Saga Pattern With NServiceBus in C#

In its simplest form, a saga is a sequence of local transactions. Each transaction updates data within a single service, and each service publishes an event to trigger the next transaction in the saga. If any transaction fails, the saga executes compensating transactions to undo the impact of the failed transaction. The Saga Pattern is ideal for long-running, distributed transactions where each step needs to be reliable and reversible. It allows us to maintain data consistency across services without the need for distributed locks or two-phase commit protocols, which can add significant complexity and performance overhead. ... The Saga Pattern is a powerful tool in our distributed systems toolbox, allowing us to manage complex business transactions in a reliable, scalable, and maintainable way. Additionally, when we merge the Saga Pattern with the Event Sourcing Pattern, we significantly enhance traceability by constructing a comprehensive sequence of events that can be analyzed to comprehend the transaction flow in-depth.
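
A minimal, in-process sketch of that flow in plain Python (not NServiceBus; the step names and order state are invented): each local transaction is paired with a compensating action that runs, in reverse order, if a later step fails.

```python
# Saga sketch: run local transactions in sequence; on failure, undo the
# completed ones with their compensating transactions, newest first.

class SagaStep:
    def __init__(self, name, action, compensation):
        self.name = name
        self.action = action
        self.compensation = compensation


def run_saga(steps):
    completed = []
    for step in steps:
        try:
            step.action()  # the local transaction
        except Exception as exc:
            # Failure: run compensating transactions in reverse order.
            for done in reversed(completed):
                done.compensation()
            raise RuntimeError(f"saga rolled back at '{step.name}'") from exc
        completed.append(step)


if __name__ == "__main__":
    state = {"stock_reserved": False}

    def charge_card():
        raise RuntimeError("card declined")  # simulate a failing step

    steps = [
        SagaStep("reserve stock",
                 lambda: state.update(stock_reserved=True),
                 lambda: state.update(stock_reserved=False)),
        SagaStep("charge card", charge_card, lambda: None),
    ]
    try:
        run_saga(steps)
    except RuntimeError as err:
        print(err, "| state after rollback:", state)
```

In a distributed setting the steps and compensations would be separate message handlers coordinated by the saga's persisted state rather than a single in-process loop, but the control flow is the same idea.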


Efficiency and sustainability in legacy data centers

A recent analyst report found a “wave of technological trends” is driving change throughout the data center sector at an unprecedented pace, with “rapidly diversifying business applications generating terabytes of data.” All that data has to go somewhere, and as hyperscale cloud providers push some of their workloads away from large, CapEx-intensive centralized hubs into Tier II and Tier III colocation markets — it’s looking like colos may be in greater demand than ever before. However, these circumstances pose a serious challenge for the colocation sector, as “the resulting workloads have exploded onto legacy data center infrastructures”, many of which may be “ill-equipped to handle them.” Now, the colocation market finds itself caught between two conflicting macroeconomic forces. On one hand, the growth in demand puts greater pressure on operators in Tier II and III markets to build more facilities, faster, to accommodate larger and more complex workloads; on the other, the existential need to reduce carbon emissions and slash energy consumption is vital.


A quantum computer that can’t be simulated on classical hardware could be a reality next year

The current-generation machines are still very much in the noisy era of quantum computing. Ilyas Khan, who founded Cambridge Quantum out of the University of Cambridge in 2014 and now works as chief product officer, told Tech Monitor that we’re moving into the “mid-stage NISQ”, where the machines are still noisy but we’re seeing signs of logical qubits and utility. Thanks to error correction, detection and mitigation techniques, even on noisy error-prone qubits, many companies have been able to produce usable outcomes. But at this stage, the structure and performance of the quantum circuits could still be simulated using classical hardware. That will change next year, says Khan. “We think it’s important for quantum computers to be useful in real-life problem solving,” he says. “Our current system model, H2, has 32 qubits in its current instantiation, all to all connected with mid-circuit measurement.”
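
Some rough arithmetic (mine, not the article's) shows why exact classical simulation becomes infeasible somewhere past this qubit count: a full state vector needs 2^n complex amplitudes.

```python
# Memory needed to hold a full n-qubit state vector, assuming 16 bytes per
# complex amplitude (double precision). Illustrative only: real simulators can
# trade memory for time or exploit circuit structure to push a bit further.

def state_vector_gib(n_qubits: int) -> float:
    return (2 ** n_qubits) * 16 / 2**30

for n in (20, 32, 40, 50):
    print(f"{n} qubits: ~{state_vector_gib(n):,.2f} GiB")
```

Around 32 qubits the state vector still fits on a large server (about 64 GiB), but each added qubit doubles the requirement, so a few dozen more puts exact simulation beyond any classical machine.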


Protecting your business through test automation

The inadequate pre-launch testing forces teams to then scramble post-launch to fix faulty software applications with renewed urgency, with the added pressure of managing the potential loss of revenue and damaged brand reputation caused by the defect. When the faulty software reaches end users, dissatisfied customers are a problem that could have far longer-reaching effects as users pass on their negative experiences to others. The negative feedback could also prevent potential new customers from ever trying the software in the first place. So why is software not being tested properly? Changing customer behaviours in the financial services sector, as well as increased competition from digital-native fintech start-ups, have led many organisations to invest in a huge amount of digital transformation in recent years. With companies coming under more pressure than ever to respond to market demands and user experience trends through increasingly frequent software releases, the sheer volume of software needing testing has skyrocketed, placing a further burden on resources already stretched to breaking point.


Implementing zero trust with the Internet of Things (IoT)

There’s a strongly held view that it simply isn’t possible to trust any IoT device, even if it’s equipped with automatic security updating. “As a former CIO, my guidance is that preparation is the best defense,” Archundia tells ITPro. IoT devices are often just too much of a risk; they’re too soft an entry point into the organization to overlook. It’s best to assume each device is a hole in an enterprise’s defenses. Perhaps each device won’t be a hole at all times, but some may be for at least some of the time. So long as the hole isn’t plugged, it can be found and exploited. That’s actually fine in a zero trust environment, because it assumes every single act, by a human or a device, could be malicious. ... “Because zero trust focuses on continuously verifying and placing security as close to each asset as possible, a cyber attack need not have far-reaching consequences in the organization,” he says. “By relying on techniques such as secured zones, the organization can effectively limit the blast radius of an attack, ensuring that a successful attack will have limited benefits for the threat agent.”


US Data Privacy Relationship Status: It’s Complicated

The American Data Privacy and Protection Act (ADPPA) is a bill that if passed would become the first set of federal privacy regulations that would supersede state laws. While it passed a House of Representatives commerce committee vote by a 53-2 margin in July 2022, the bill is still waiting on a full House vote and then a Senate vote. In the US, 10 states have enacted comprehensive privacy laws, including California, Colorado, Connecticut, Indiana, Iowa, Montana, Tennessee, Texas, Utah, and Virginia. More than a dozen other states have proposed bills in various states of activity. The absence of an overarching federal law means companies must pick and choose based on where they happen to be doing business. Some businesses opt to start with the most stringent law and model their own data privacy standards accordingly. The current global standard for privacy is Europe’s 2018 General Data Protection Regulation (GDPR) and has become the model for other data privacy proposals. Since many large US companies do business globally, they are very familiar with GDPR. 


KillNet DDoS Attacks Further Moscow's Psychological Agenda

Mandiant's assessment of the 500 DDoS attacks launched by KillNet and associated groups from Jan. 1 through June 20 offers further evidence that the collective isn't some grassroots assembly of independent, patriotic hackers. "KillNet's targeting has consistently aligned with established and emerging Russian geopolitical priorities, which suggests that at least part of the influence component of this hacktivist activity is intended to directly promote Russia's interests within perceived adversary nations vis-a-vis the invasion of Ukraine," Mandiant said. Researchers said KillNet and its affiliates often attack technology, social media and transportation firms, as well as NATO. ... To hear KillNet's recounting of its attacks via its Telegram channel, these hacktivists are nothing short of devastating. The same goes for other past and present members of the KillNet collective, including KillMilk, Tesla Botnet, Anonymous Russia and Zarya. Recent attacks by Anonymous Sudan have involved paid cloud infrastructure and had a greater impact, although it's unclear if this will become the norm.


Agile vs. Waterfall: Choosing the Right Project Methodology

Choosing the right project management methodology lays the foundation for effective planning, collaboration, and delivery. Failure to select the appropriate methodology can lead to many challenges and setbacks that can hinder project progress and ultimately impact overall success. Let's delve into why it's crucial to choose the right project management methodology and explore in-depth what can go wrong if an unsuitable methodology is employed. ... The right methodology enables effective resource allocation and utilization. Projects require a myriad of resources, including human, financial, and technological. If you select an inappropriate methodology, you can experience inefficient resource management, causing budget overruns, underutilization of skills, and time delays. For instance, an Agile methodology that relies heavily on frequent collaboration and iterative development may not be suitable for projects with limited resources and a hierarchical team structure.



Quote for the day:

"People leave companies for two reasons. One, they don't feel appreciated. And two, they don't get along with their boss." -- Adam Bryant

Daily Tech Digest - July 22, 2023

All-In-One Data Fabrics Knocking on the Lakehouse Door

The fact that IBM, HPE, and Microsoft made such similar data fabric and lakehouse announcements indicates there is strong market demand, Patel says. But it’s also partly a result of the evolution of data architecture and usage patterns, he says. “I think there are probably some large enterprises that decide, listen, I can’t do this anymore. You need to go and fix this. I need you to do this,” he says. “But there’s also some level of just where we’re going…We were always going to be in a position where governance and security and all of those types of things just become more and more important and more and more intertwined into what we do on a daily basis. So it doesn’t surprise me that some of these things are starting to evolve.” While some organizations still see value in choosing the best-of-breed products in every category that makes up the data fabric, many will gladly give up having the latest, greatest feature in one particular area in exchange for having a whole data fabric they can move into and be productive from day one.


Shift Left With DAST: Dynamic Testing in the CI/CD Pipeline

The integration of DAST in the early stages of development is crucial for several reasons. First, by conducting dynamic security testing from the outset, teams can identify vulnerabilities earlier, making them easier and less costly to fix. This proactive approach helps to prevent security issues from becoming ingrained in the code, which can lead to significant problems down the line. Second, early integration of DAST encourages a security-focused mindset from the beginning of the project, promoting a culture of security within the team. This cultural shift is crucial in today’s cybersecurity climate, where threats are increasingly sophisticated, and the stakes are higher than ever. DAST doesn’t replace other testing methods; rather, it complements them. By combining these methods, teams can achieve a more comprehensive view of their application’s security. In a shift left approach, this combination of testing methods can be very powerful. By conducting these tests early and often, teams can ensure that both the external and internal aspects of their application are secure. This layered approach to security testing can help to catch any vulnerabilities that might otherwise slip through the cracks.
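
One common shape for this is a pipeline step that runs the dynamic scan against a deployed test environment and fails the build on severe findings. The sketch below assumes a hypothetical scanner CLI ("dast-scan") and JSON report format; it stands in for whatever DAST tool a team actually uses and is not a specific product's interface.

```python
# CI gate sketch: run a DAST scan against a staging URL and return a non-zero
# exit code (failing the pipeline) if high-severity findings come back.
# "dast-scan", its flags, and the report fields are assumed placeholders.

import json
import subprocess
import sys

TARGET = "https://staging.example.com"  # assumed pre-production deployment

def run_scan() -> list:
    subprocess.run(
        ["dast-scan", "--target", TARGET, "--output", "findings.json"],
        check=True,
    )
    with open("findings.json") as fh:
        return json.load(fh)

def main() -> int:
    severe = [f for f in run_scan() if f.get("severity") in ("high", "critical")]
    for finding in severe:
        print(f"[{finding['severity']}] {finding.get('name')} at {finding.get('url')}")
    return 1 if severe else 0  # non-zero exit blocks the merge

if __name__ == "__main__":
    sys.exit(main())
```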


First known open-source software attacks on banking sector could kickstart long-running trend

In the first attack detailed by Checkmarx, which occurred on 5 April and 7 April, a threat actor leveraged the NPM platform to upload packages that contained a preinstall script that executed its objective upon installation. To appear more credible, the attacker created a spoofed LinkedIn profile page of someone posing as an employee of the victim bank. Researchers originally thought this may have been linked to legitimate penetration testing services commissioned by the bank, but the bank revealed that to not be the case and that it was unaware of the LinkedIn activity. The attack itself was modeled on a multi-stage approach which began with running a script to identify the victim’s operating system – Windows, Linux, or macOS. Once identified, the script then decoded the relevant encrypted files in the NPM package which then downloaded a second-stage payload. Checkmarx said that the Linux-specific encrypted file was not flagged as malicious by online virus scanner VirusTotal, allowing the attacker to “maintain a covert presence on the Linux systems” and increase its chances of success.
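
The mechanism being abused here is npm's lifecycle hooks, which run automatically at install time. A simple defensive audit (my sketch, not something from the Checkmarx write-up) is to list every installed dependency that declares such a hook so it can be reviewed; many teams also install with npm's --ignore-scripts flag and re-enable scripts only for vetted packages.

```python
# Walk an installed dependency tree and report packages that declare install-
# time lifecycle scripts ("preinstall", "install", "postinstall").

import json
import pathlib

HOOKS = ("preinstall", "install", "postinstall")

def packages_with_hooks(root: str = "node_modules"):
    for manifest in pathlib.Path(root).glob("**/package.json"):
        try:
            data = json.loads(manifest.read_text())
        except (OSError, json.JSONDecodeError):
            continue
        declared = [h for h in HOOKS if h in data.get("scripts", {})]
        if declared:
            yield data.get("name", str(manifest)), declared

if __name__ == "__main__":
    for name, hooks in packages_with_hooks():
        print(f"{name}: {', '.join(hooks)}")
```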


From data warehouse to data fabric: the evolution of data architecture

By introducing domain-oriented data ownership, domain teams become accountable for their data and products, improving data quality and governance. Traditional data lakes often encounter challenges related to scalability and performance when handling large volumes of data. However, data mesh architecture solves these scalability issues through its decentralized and self-serve data infrastructure. With each domain having the autonomy to choose the technologies and tools that best suit their needs, data mesh allows teams to scale their data storage and processing systems independently. ... Data Fabric is an integrated data architecture that is adaptive, flexible, and secure. It is an architectural approach and technology framework that addresses data lake challenges by providing a unified and integrated view of data across various sources. Data Fabric allows faster and more efficient access to data by abstracting away the technological complexities involved in data integration, transformation, and movement so that anybody can use it.


What Is the Role of Software Architect in an Agile World?

It has become evident that there is a gap between the architecture team and those who interact with the application on a daily basis. Even in the context of the microservice architecture, failing to adhere to best practices can result in a tangled mess that may force a return to monolithic structures, as we have seen with Amazon Web Services. I believe that it is necessary to shift architecture left and provide architects with better tools to proactively identify architecture drift and technical debt buildup, injecting architectural considerations into the feature backlog. With few tools to understand the architecture or identify the architecture drift, the role of the architect has become a topic of extensive discussion. Should every developer be responsible for architecture? Most companies have an architect who sets standards, goals, and plans. However, this high-level role in a highly complex and very detailed software project will often become detached from the day-to-day reality of the development process. 


Rapid growth without the risk

The case for legacy modernization should today be clear: technical debt is like a black hole, sucking up an organization’s time and resources, preventing it from developing the capabilities needed to evolve and adapt to drive growth. But while legacy systems can limit and inhibit business growth, from large-scale disruption to subtle but long-term stagnation, changing them doesn’t have to be a painful process of “rip-and-replace.” In fact, rather than changing everything only to change nothing, an effective program enacts change in people, processes and technology incrementally. It focuses on those areas that will make the biggest impact and drive the most value, making change manageable in the short term yet substantial in its effect on an organization's future success and sustainable in the long term. In an era where executives often find themselves in FOMU (fear of messing up) mode, they would be wise to focus on those areas of legacy modernization that will make the biggest impact and drive the most value.


Data Fabric: How to Architect Your Next-Generation Data Management

The data fabric encompasses a broader concept that goes beyond standalone solutions such as data virtualization. Rather, the architectural approach of a data fabric integrates multiple data management capabilities into a unified framework. The data fabric is an emerging data management architecture that provides a net that is cast to stitch together multiple heterogeneous data sources and types through automated data pipelines. ... For business teams, a data fabric empowers nontechnical users to easily discover, access, and share the data they need to perform everyday tasks. It also bridges the gap between data and business teams by including subject matter experts in the creation of data products. ... Implementing an efficient data fabric architecture is not accomplished with a single tool. Rather, it incorporates a variety of technology components such as data integration, data catalog, data curation, metadata analysis, and augmented data orchestration. Working together, these components deliver agile and consistent data integration capabilities across a variety of endpoints throughout hybrid and multicloud environments.


Data Lineage Tools: An Overview

Modern data lineage tools have evolved to meet the needs of organizations that handle large volumes of data. These tools provide a comprehensive view of the journey of data from its source to its destination, including all transformations and processing steps along the way. They enable organizations to trace data back to its origins, identify any changes made along the way, and ensure compliance with regulatory requirements. One key feature of modern lineage tools is their ability to automatically capture and track metadata across multiple systems and platforms. This capability removes the need for manual, time-consuming documentation. Another important aspect of modern data lineage tools is their integration with other technologies such as metadata management systems, Data Governance platforms, and business intelligence solutions. This enables organizations to create a unified view of their data landscape and make informed decisions based on accurate, up-to-date information.


The Impact of AI Data Lakes on Data Governance and Security

One of the primary concerns with AI data lakes is the potential for data silos to emerge. Data silos occur when data is stored in separate repositories or systems that are not connected or integrated with one another. This can lead to a lack of visibility and control over the data, making it difficult for organizations to enforce data governance policies and ensure data security. To mitigate this risk, organizations must implement robust data integration and management solutions that enable them to maintain a comprehensive view of their data landscape and ensure that data is consistently and accurately shared across systems. Another challenge associated with AI data lakes is the need to maintain data quality and integrity. As data is ingested into the data lake from various sources, it is essential to ensure that it is accurate, complete, and consistent. Poor data quality can lead to inaccurate insights and decision-making, as well as increased security risks. 


AppSec Consolidation for Developers: Why You Should Care

Complicated and messy AppSec programs are yielding a three-fold problem: unquantifiable or unknowable levels of risk for the organization, ineffective resource management and excessive complexity. This combined effect leaves enterprises with a fragmented picture of total risk and little useful information to help them strengthen their security posture. ... An increase in the number of security tools leads to an increase in the number of security tests, which in turn translates to an increase in the number of results. This creates a vicious cycle that adds complexity to the AppSec environment that is both unnecessary and avoidable. Most of the time, these results are stored in their respective point tools. As a result, developers frequently receive duplicate issues as well as remediation guidance that is ineffective or lacking context, causing them to waste critical time and resources. Without consolidated and actionable outcomes, it is impossible to avoid duplication of findings and remediation actions.



Quote for the day:

"There is no substitute for knowledge." -- W. Edwards Deming