Daily Tech Digest - July 27, 2023

'FraudGPT' Malicious Chatbot Now for Sale on Dark Web

Both WormGPT and FraudGPT can help attackers use AI to their advantage when crafting phishing campaigns, generating messages aimed at pressuring victims into falling for business email compromise (BEC), and carrying out other email-based scams, for starters. FraudGPT can also help threat actors do a slew of other bad things, such as: writing malicious code; creating undetectable malware; finding non-VBV bins; creating phishing pages; building hacking tools; finding hacking groups, sites, and markets; writing scam pages and letters; finding leaks and vulnerabilities; and learning to code or hack. Even so, helping attackers create convincing phishing campaigns appears to remain one of the main use cases for a tool like FraudGPT, according to Netenrich. ... As phishing remains one of the primary ways that cyberattackers gain initial entry onto an enterprise system to conduct further malicious activity, it's essential to implement conventional security protections against it. These defenses can still detect AI-enabled phishing and, more importantly, subsequent actions by the threat actor.


Key factors for effective security automation

A few factors generally drive the willingness to automate security. One factor is whether the risk of not automating exceeds the risk of an automation going wrong: if you conduct business in a high-risk environment, the potential for damage when not automating can be higher than the risk of triggering an automated response based on a false positive. Financial fraud is a good example: banks routinely block suspicious transactions automatically, because a manual process would be too slow. Another factor is when the damage potential of an automation going wrong is low. For example, there is no potential damage in trying to fetch a non-existent file from a remote system for forensic analysis. But what matters most is how reliable the automation is. For example, many threat actors today use living-off-the-land techniques, such as abusing common and benign system utilities like PowerShell. From a detection perspective, there are no uniquely identifiable characteristics like a file hash or a malicious binary to inspect in a sandbox.
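Because living-off-the-land activity offers no hash or binary to flag, detection typically falls back on behavioral heuristics. A minimal sketch, with an invented and far-from-complete indicator list, might score PowerShell command lines like this:

```python
import re

# Illustrative indicators only -- real detection engines use far richer
# behavioral context (parent process, network activity, frequency baselines).
SUSPICIOUS_PATTERNS = [
    r"-enc(odedcommand)?\b",  # encoded PowerShell payloads
    r"downloadstring",        # in-memory download-and-execute
    r"-nop\b",                # -NoProfile, common in attacker one-liners
    r"hidden",                # -WindowStyle Hidden
    r"bypass",                # -ExecutionPolicy Bypass
]

def suspicion_score(command_line: str) -> int:
    """Count how many suspicious indicators appear in a command line."""
    cmd = command_line.lower()
    return sum(1 for pattern in SUSPICIOUS_PATTERNS if re.search(pattern, cmd))

benign = "powershell Get-ChildItem C:\\Logs"
hostile = "powershell -nop -WindowStyle Hidden -EncodedCommand SQBFAFgA"

print(suspicion_score(benign))   # 0
print(suspicion_score(hostile))  # 3
```

In practice a score like this would only be one reliability signal among many, which is exactly the point the excerpt makes: without a sandbox-able artifact, automation has to weigh noisy behavioral evidence.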


API-First Development: Architecting Applications with Intention

More traditionally, tech companies often started with a particular user experience in mind when setting out to develop a product. The API was then developed in a more or less reactive way to transfer all the necessary data required to power that experience. While this approach gets you out the door fast, it isn’t very long before you probably need to go back inside and rethink things. Without an API-first approach, you feel like you’re moving really fast, but it’s possible that you’re just running from the front door to your driveway and back again without even starting the car. API-first development flips this paradigm by treating the API as the foundation for the entire software system. Let’s face it, you are probably going to want to power more than one developer, maybe even several different teams, all possibly even working on multiple applications, and maybe there will even be an unknown number of third-party developers. Under these fast-paced and highly distributed conditions, your API cannot be an afterthought.
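Contract-first thinking can be sketched in a few lines: declare the API's shape up front, then check every payload against it before any one consumer's experience drives the design. The `USER_CONTRACT` fields below are hypothetical examples, not any real product's API:

```python
# Hypothetical contract: field names and types declared before any handler
# or UI exists, so multiple teams can build against the same shape.
USER_CONTRACT = {
    "id": int,
    "email": str,
    "created_at": str,  # ISO 8601 timestamp
}

def validate(payload: dict, contract: dict) -> list[str]:
    """Return a list of contract violations (empty means conformant)."""
    errors = []
    for field, expected_type in contract.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}")
    return errors

good = {"id": 1, "email": "a@example.com", "created_at": "2023-07-27T00:00:00Z"}
bad = {"id": "1", "email": "a@example.com"}

print(validate(good, USER_CONTRACT))  # []
print(validate(bad, USER_CONTRACT))   # ['id: expected int', 'missing field: created_at']
```

In a real API-first workflow this role is usually played by a formal specification (such as an OpenAPI document) that both servers and clients are generated from or tested against.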


What We Can Learn from Australia’s 2023-2030 Cybersecurity Strategy

One of the challenges facing enterprises in Australia today is a lack of clarity in terms of cybersecurity obligations, both from an operational perspective and as organizational directors. Though there are a range of implicit cybersecurity obligations designated to Australian enterprises and nongovernment entities, more explicitly stated obligations are urgently needed to increase national cyberresilience. There are also opportunities to simplify and streamline existing regulatory frameworks to ensure easy adoption of those frameworks and cybersecurity obligations. ... Another important aspect of the upcoming Australian Cybersecurity Strategy is to strengthen international cyberleadership, enabling leaders to seize opportunities and address challenges presented by the shifting cyberenvironment. To keep up with new and emerging technologies, this cybersecurity strategy aims to take tangible steps to shape global thinking about cybersecurity.


Is your data center ready for generative AI?

Generative AI applications create significant demand for computing power in two phases: training the large language models (LLMs) that form the core of generative AI systems, and then operating the application with these trained LLMs, says Raul Martynek, CEO of data center operator DataBank. “Training the LLMs requires dense computing in the form of neural networks, where billions of language or image examples are fed into a system of neural networks and repeatedly refined until the system ‘recognizes’ them as well as a human being would,” Martynek says. Neural networks require tremendously dense high-performance computing (HPC) clusters of GPU processors running continuously for months, or even years at a time, Martynek says. “They are more efficiently run on dedicated infrastructure that can be located close to the proprietary data sets used for training,” he says. The second phase is the “inference process” or the use of these applications to actually make inquiries and return data results.


Siloed data: the mountain of lost potential

Given that AI’s growing capabilities for handling customer service are only made possible through data, the risk of not breaking down internal data siloes is sizeable, and not just in terms of missed opportunities. Companies could also see a decline in the speed and quality of their customer service as contact centre agents need to spend longer navigating multiple platforms and dashboards to find the information needed to help answer customers’ queries. Eliminating data siloes requires educating everyone in the business to understand the necessity of sharing data through an open culture and encouraging the data sides of operations to co-ordinate efforts, align visions and achieve goals. The synchronisation of business operations with customer experience, alongside adopting a data-driven approach, can produce significant benefits such as increased customer spending. ... Data, working for and with AI, must be placed at the centre of the business model. This means getting board buy-in to establish a data factory run by qualified data engineers and analysts who are capable of driving the collection and use of data within the organisation.


An Overview of Data Governance Frameworks

Data governance frameworks are built on four key pillars that ensure the effective management and use of data across an organization. These pillars ensure data is accurate, can be effectively combined from different sources, is protected and used in compliance with laws and regulations, and is stored and managed in a way that meets the needs of the organization. ... Furthermore, a lack of governance can lead to confusion and duplication of effort, as different departments or individual users try to manage data with their own methods. A well-designed data governance framework ensures all users understand the rules for managing data and that there is a clear process for making changes or additions to the data. It unifies teams, improving communication between different teams and allowing different departments to share best practices. In addition, a data governance framework ensures compliance with laws and regulations. From HIPAA to GDPR, there are a multitude of data privacy laws and regulations all over the world. Running afoul of these legal provisions is expensive in terms of fines and settlement costs and can damage an organization’s reputation.


Governance — the unsung hero of ESG

What's interesting is that for the most part, they're all at different stages of transformation and managing the risks of transformation. A board has four responsibilities: observing performance; approving and providing resources to fund the strategy; hiring and developing the succession plan; and risk management. Depending on where you are in a normal cycle of a business or the market, the board is involved in all four. Also, I take lessons that I've learned at other boards and apply them, where possible, to Baker Hughes' situation and vice versa: take some of the lessons that I'm learning and the things that I'm hearing in the Baker Hughes situation — unattributed, of course — and bring them into other boards. Sometimes there's a nice element of sharing. As you know, Baker Hughes has a very strong Board and I am a good student at taking down good and thoughtful questions from board members and bringing them to other company boards, if appropriate.


Why whistleblowers in cybersecurity are important and need support

“Governments should have a whistleblower program with clear instructions on how to disclose information, then offer the resources to enable procedures to encourage employees to come forward and guarantee a safe reporting environment,” she says. Secondly, nations need to upgrade their legislation to include strong anti-retaliation protection for tech workers, making it unlawful for various entities to engage in reprisal. This includes job-related pressure, harassment, doxing, blacklisting, and retaliatory investigations. ... To further increase the chances of reporting, employees can be offered regular training sessions in which they are informed about the importance of coming forward on cybersecurity issues, the ways to report wrongdoing, and the protection mechanisms they could access. Moreover, leadership should explain that it has zero tolerance for retaliation. “Swift action should be taken if any instances of retaliation come to light,” according to Empower Oversight. The message leadership should convey is that issues are taken seriously and that C-level executives are open to conversation if the situation requires it.


Cloud Optimization: Practical Steps to Lower Your Bills

Optimization is always an iterative process, requiring continual adjustment as time goes on. However, there are many quick wins and strategies that you can implement today to refine your cloud footprint:

- Unused virtual machines (VMs), storage and bandwidth can lead to unnecessary expenses. Conducting periodic evaluations of your cloud usage and identifying such underutilized resources can effectively minimize costs. Check your cloud console now. You might just find a couple of VMs sitting there idle, accidentally left behind after the work was done.
- Temporary backup resources, such as VMs and storage, are frequently used for storing data and application backups. Automate the deletion process of these temporary backup resources to save money.
- Selecting the appropriate tier entails choosing the cloud resource that aligns best with your requirements. For instance, if you anticipate a high volume of traffic and demand, opting for a high-end VM would be suitable. Conversely, for smaller projects, a lower-end VM might suffice.
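The periodic evaluation step can be partially automated. A rough sketch of an idle-resource audit follows; the VM inventory, CPU threshold, and idle window are invented for illustration, since a real audit would pull this data from your provider's monitoring or billing API:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory -- in practice, fetched from the cloud provider.
now = datetime(2023, 7, 27, tzinfo=timezone.utc)
vms = [
    {"name": "web-prod-1",    "avg_cpu": 0.62, "last_used": now - timedelta(hours=2)},
    {"name": "test-leftover", "avg_cpu": 0.01, "last_used": now - timedelta(days=45)},
    {"name": "batch-runner",  "avg_cpu": 0.03, "last_used": now - timedelta(days=20)},
]

def idle_candidates(vms, cpu_threshold=0.05, idle_days=14):
    """Flag VMs with low average CPU that nobody has touched recently."""
    cutoff = now - timedelta(days=idle_days)
    return [vm["name"] for vm in vms
            if vm["avg_cpu"] < cpu_threshold and vm["last_used"] < cutoff]

print(idle_candidates(vms))  # ['test-leftover', 'batch-runner']
```

The thresholds are a judgment call: too aggressive and you flag legitimate low-traffic services, too lenient and the forgotten VMs keep billing.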



Quote for the day:

"Your job gives you authority. Your behavior gives you respect." -- Irwin Federman

Daily Tech Digest - July 26, 2023

How digital humans can make healthcare technology more patient-centric

Like humans, digital humans have anatomy, and several technologies are used to create them:

- The Representation: The “face” of the digital entity can be created in the likeness of a real human or a caricature. The quality of this representation is critical to a successfully designed digital human.
- Natural Language Processing (NLP) or Natural Language Understanding (NLU): NLP/NLU ensures that the digital human can properly interpret information, such as speech detection, speech-to-text translation, and language recognition and detection. Advanced forms of NLP/NLU will include sign language as well.
- Cognitive Services: Cognitive services are used for creating personalized communication, including language translation, speech synthesis, voice customization, speech prosody and pitch, nomenclature and specialized pronunciation.
- Artificial Intelligence: The AI layer, whether generative, extractive or other forms, provides contextual conversation response, context recognition and, for generative AI, content creation.


CISO to BISO – What's your next role?

The role of a BISO has emerged over the past decade, as organisations recognise the need for dedicated security roles and skills within specific business units or departments. While it is challenging to pinpoint an exact date when the role of BISO became established across all industries, it can be traced back to the increasing emphasis on information security, the evolving nature of cybersecurity threats and the increasingly complex technical infrastructures in use. As businesses have become more digital, data-centric, and interconnected, the complexity and diversity of security risks have grown exponentially with them. Traditional approaches to information security, where the responsibility solely resides with the IT department or a centralised security team, have proved inadequate to address the unique security challenges faced by businesses today. ... When implementing information security in larger organisations, we would look for security champions within operational or support functions. People who showed some kind of interest in the world of cybersecurity were usually offered a support role on a voluntary basis.


Top cybersecurity tools aimed at protecting executives

A recent Ponemon report, sponsored by BlackCloak, revealed that 42% of respondents indicated that key executives and family members have already experienced at least one cyberattack. While it's likely that cybercriminals will target executives and the digital assets they have access to, organizations are not responding with suitable strategies, budgets, and staff, the report found. More than half (58%) of respondents reported that the prevention of threats against executives and their digital assets is not covered in their cyber, IT and physical security strategies and budget. The lack of attention is demonstrated by only 38% of respondents reporting a dedicated team to prevent or respond to cyber or privacy attacks against executives and their families. The best practice would be to protect the executive as well as their family, inner circle, and associates with a broad range of measures, Agency's Executive Digital Protection report noted. The solutions need to balance breadth, value, privacy, and specialization, it said.


How WebAssembly will transform edge computing

As the next major technical abstraction, Wasm aspires to address the common complexity inherent in the management of the day-to-day dependencies embedded into every application. It addresses the cost of operating applications that are distributed horizontally, across clouds and edges, to meet stringent performance and reliability requirements. Wasm’s tiny size and secure sandbox mean it can be safely executed everywhere. With a cold start time in the range of 5 to 50 microseconds, Wasm effectively solves the cold start problem. It is compatible with Kubernetes without being dependent upon it. Its diminutive size means it can be scaled to a significantly higher density than containers and, in many cases, it can even be performantly executed on demand with each invocation. But just how much smaller is a Wasm module compared to a micro-K8s containerized application? An optimized Wasm module is typically around 20 KB to 30 KB in size. When compared to a Kubernetes container, the Wasm compute units we want to distribute are several orders of magnitude smaller.
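The "orders of magnitude" claim can be sanity-checked with quick arithmetic. The ~25 KB Wasm figure comes from the excerpt; the 100 MB container image size is an assumed, illustrative value, not a measurement:

```python
import math

# Back-of-the-envelope comparison: an optimized Wasm module (~25 KB, per the
# article) versus an assumed 100 MB container image.
wasm_kb = 25
container_kb = 100 * 1024  # 100 MB expressed in KB

ratio = container_kb / wasm_kb
print(f"container is ~{ratio:,.0f}x larger")            # ~4,096x
print(f"orders of magnitude: {math.log10(ratio):.1f}")  # ~3.6
```

Even with a much leaner container image, the gap stays in the thousands, which is what makes per-invocation cold starts plausible for Wasm but not for containers.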


Data Governance Trends and Best Practices for Storage Environments

The more intelligent the data layer is, the more value the data can provide. More valuable data makes the role of data governance stronger within the organization. Active archive solutions can serve as a framework for data governance by including an intelligent data management software layer that automatically places data where it belongs and optimizes its location based on cost, performance, and user access needs. “Data governance is the process of managing the availability, usability, integrity and security of enterprise data,” said Rich Gadomski, head of tape evangelism at FUJIFILM Recording Media U.S.A. and co-chair of the Active Archive Alliance. ... Supporting active archives with optical disk storage technologies can provide long-term data preservation. These technologies are designed to withstand environmental factors like temperature, humidity, and magnetic interference, ensuring the integrity and longevity of archived data. With a typical lifespan of hundreds of years or more, optical disks are well-suited for archival purposes.


Dr. Pankaj Setia on the challenges that will redefine CIOs’ careers

First, a risk-averse culture may be addressed through a two-pronged approach. The first prong is for CIOs to champion training and engagement of employees, to create a digital mindset and enhance understanding of the digital transformation being undertaken. It is imperative that the employees are excited about the transformation. ... The second prong is for CIOs to work toward getting buy-in from top management. For CIOs to get desired results, the board and top management team (TMT) must actively champion digital transformation initiatives. Many examples from the corporate world underline the role of top leadership in engaging and motivating employee teams. Second, overcoming the barriers due to siloed strategy is a complex endeavor. It is not always easy, as professional management relies on specialization in a functional domain (e.g., marketing, finance, human resources, etc.). However, because digital transformation inherently spans functional domains, siloed strategies — which emphasize super specialization — are not optimal. Therefore, CIOs should look to create cross-functional teams.


Risks and Strategies to Use Generative AI in Software Development

Among the risks of using AI in software development is the potential that it regurgitates bad code that has been making the rounds in the open-source world. “There’s bad code being copied and used everywhere,” says Muddu Sudhakar, CEO and co-founder of Aisera, developer of a generative AI platform for enterprise. “That’s a big risk.” The risk is not simply poorly written code being repeated -- the bad code might be put into play by bad actors looking to introduce vulnerabilities they may exploit at a later date. Sudhakar says organizations that draw upon generative AI, and other open-source resources, should put controls in place to spot such risks if they intend to make AI part of the development equation. “It’s in their interest because all it takes is one bad code,” he says, pointing to the long-running hacking campaign behind the SolarWinds data breach. The skyrocketing appeal of AI for development seems to outweigh concerns about the potential for data to leak or for other issues to occur. “It’s so useful that it’s worth actually being aware of the risks and doing it anyway,” says Babak Hodjat, CTO of AI and head of Cognizant AI Labs.


Supply Chain, Open Source Pose Major Challenge to AI Systems

Bengio said one big risk area around AI systems is open-source technology, which "opens the door" to bad actors. Adversaries can take advantage of open-source technology without huge amounts of compute or strong expertise in cybersecurity, according to Bengio. He urged the federal government to establish a definition of what constitutes open-source technology - even if it changes over time - and use it to ensure future open-source releases for AI systems are vetted for potential misuse before being deployed. "Open source is great for scientific progress," Bengio said. "But if nuclear bombs were software, would you allow open-source nuclear bombs?" Bengio said the United States must ensure that spending on AI safety is equivalent to how much the private sector is spending on new AI capabilities, either through incentives to businesses or direct investment in nonprofit organizations. The safety investments should address the hardware used in AI systems as well as cybersecurity controls necessary to safeguard the software that powers AI systems.


Zero-Day Vulnerabilities Discovered in Global Emergency Services Communications Protocol

In a demonstration video of CVE-2022-24401, researchers showed that an attacker would be able to capture the encrypted message by targeting a radio to which the message was being sent. Midnight Blue founding partner Wouter Bokslag says that in none of the circumstances for this vulnerability do you get your hands on a key: "The only thing you're getting is the key stream, which you can use to decrypt arbitrary frames, or arbitrary messages, that go over the network." A second demonstration video of CVE-2022-24402 reveals that there is a backdoor in the TEA1 algorithm that affects networks relying on TEA1 for confidentiality and integrity. It was also discovered that the TEA1 algorithm uses an 80-bit key that an attacker could brute-force, allowing them to listen in to the communications undetected. Bokslag admits that using the term backdoor is strong, but says it is justified in this instance. "As you feed an 80-bit key to TEA1, it flows through a reduction step which leaves it with only 32 bits of key material, and it will carry on doing the decryption with only those 32 bits," he says.
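The reduction from 80 to 32 bits of effective key material is what makes the brute-force claim credible, and rough arithmetic shows why. The guess rate below is an assumed figure for commodity hardware, not from the research:

```python
# Assumed guess rate; real attack hardware may be faster or slower.
guesses_per_second = 1e9

full_keyspace = 2 ** 80     # nominal TEA1 key size
reduced_keyspace = 2 ** 32  # effective key material after the reduction step

seconds_full = full_keyspace / guesses_per_second
seconds_reduced = reduced_keyspace / guesses_per_second

print(f"80-bit key: ~{seconds_full / (3600 * 24 * 365):.1e} years")  # ~3.8e+07 years
print(f"32-bit key: ~{seconds_reduced:.1f} seconds")                 # ~4.3 seconds
```

A search space that would take tens of millions of years at the nominal key size collapses to seconds once only 32 bits actually matter, which is why the researchers consider the term "backdoor" justified.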


Enterprises should layer-up security to avoid legal repercussions

There are two competing temptations in the technology landscape that the seasoned security professional must navigate. The first is the temptation to totally trust the power of the tool. An overly optimistic reliance on vendor tools and promises can fail to identify security issues if the tools are not properly implemented and operationalized in your environment. A shiny SIEM tool, for example, is useless unless you have clearly documented response actions to take for each alert, as well as fully trained personnel to handle investigations. The second temptation, which I believe is more prevalent within tech and SaaS companies, is to trust no tool except for in-house tech. The thought process goes as follows: “Since we have a solid development team, and we want to keep a bench of developers for any eventuality, we need to keep their skills sharp, so we might as well build our own tools.” It’s a sound argument — up to a point. However, it may be a bit arrogant to believe your company has the expertise to develop the best-in-class SIEM solutions, ticketing systems, SAST tools, and what have you.



Quote for the day:

"If you don't demonstrate leadership character, your skills and your results will be discounted, if not dismissed." -- Mark Miller

Daily Tech Digest - July 25, 2023

The real risk of AI in network operations

One operations manager who tried generative AI technology on their own historical network data said that it suggested a configuration change that, had it been made, would have broken the entire network. “The results were wrong a quarter of the time, and very wrong maybe an eighth of the time,” the operations manager said. “I can’t act on that kind of accuracy.” ... That raises my second point about a lack of detail on how AI reached a conclusion. I’ve had generative AI give me wrong answers that I recognized because they were illogical, but suppose you didn’t have a benchmark result to test against? If you understood how the conclusion was reached, you’d have a chance of picking out a problem. Users told me that this would be essential if they were to consider generative AI a useful tool. They don’t think, nor do I, that the current generative AI state of the art is there yet. What about the other, non-generative, AI models? There are well over two dozen operations toolkits out there that claim AI or AI/ML capability. Users are more positive on these, largely because they have a limited scope of action and leave a trail of decision-making steps that can be checked quickly.


Exploring an Agilist's Story of Perseverance

“There are a few things in technology that help people with MS because they have the same problem with speech, but they’re not really effective for me,” because you still have to go back and edit everything. If you are living with ataxia, “there is a group called the National Ataxia Foundation. It is a great support group; you don’t feel like you are going through this alone. They post things about technology and tools you can use,” Apuroopa said. She also recommends utilizing your HR resources if you have any need for accommodation. An accommodation request form may be the right way to access technology or request an adjustment or change in your work environment or duties based on a medical condition. Apuroopa’s employer offers a work-from-home option. “The remote environment adds complexity,” she said, because not everyone is willing to turn their cameras on for various reasons, and you end up missing that facial connection and body language, but she’s also thankful for the option to stay home.


A critical cybersecurity backup plan that too many companies are ignoring

With the departure of a CISO, there is a loss of valuable institutional knowledge, which can impede an organization’s ability to adapt to rapidly evolving cyber threats, said Daniel Soo, risk and financial advisory principal in cyber and strategic risk at consulting firm Deloitte. “The lack of a successor could disrupt business-as-usual cybersecurity operations, resulting in delays, gaps in critical cyber risk management activities, and hindered cyber incident response and decision-making,” Soo said. In addition, CISO succession planning is key to ensuring that an organization has the right person at the right time to help drive the organization’s cyber objectives, Soo said. ... CISO succession planning should also involve anticipating future security requirements by considering the evolving nature of the business and technology landscape. “CISOs should analyze the security implications of these trends and develop policies, technologies, and skills to address future needs,” he said. “Implementing a training program can help ensure that employees are equipped with the necessary skills to tackle upcoming security challenges.”


Bridging the cybersecurity skills gap through cyber range training

Cyber ranges take traditional cyber training and turn it into real-life, experiential learning so learners can actually apply their knowledge and skills and gain real experience using a simulation method. SOC analysts, who are the last line of defense, need to continually engage in these simulations to strengthen their capabilities and create “muscle memory.” An ongoing cyber range training program with real-life attacks enhances their preparedness as individuals and as a cohesive team through immersive experiences. One thing to note is that not all cyber ranges are equal. They can vary in terms of their purpose, complexity, and available features, tools, and technology. To ensure your team is getting the most effective training, it’s critical to use a dynamic range with live-fire attacks that the whole team can participate in together, versus more of a directed lab environment or individual exercises that team members do in parallel.


Why cyber security should be part of your ESG strategy

In fact, the investment community has been singling out cyber security as one of the major risks that ESG programmes will need to address due to the potential financial losses, reputational damage and business continuity risks posed by a growing number of cyber attacks and data breaches. Investment firm Nomura already takes into account an investee firm’s cyber security performance in its credit ESG scoring model, while KPMG noted in its report that cyber security is not only applicable to the governance aspects of ESG, but also has social and environmental implications. ... “That trust you want to build from a social standpoint comes from sound cyber security practices, so you can tell customers you’re taking the right steps to protect their identity and financial information,” he added. But even after organisations have identified aspects of their businesses that are at risk, building up their risk profile remains challenging as they are often unaware of what technology assets they have, coupled with the lack of efforts to assess technical risks, Wenzler said.


Boost your tech ROI with Engineering Effectiveness

Learnings from numerous agile, DevOps, and platform transformation projects have shown that the productivity of engineering teams in most organizations is around 30 percent of their total potential. Therefore, a whopping 70 percent improvement is possible, even necessary if you want to keep up with digital-native competitors. You can achieve this by investing in both technology and the development teams themselves. Create an environment equipped with the right platforms, methodologies, and workplace culture that makes teams more productive and helps them collaborate more efficiently. It's also vital to give developers the opportunity and resources to keep their skills up to date. ... The path to modernization is not only about allocating more resources, but fundamentally about transforming business processes and culture. Talent is better utilized when outdated and inefficient workflows are revised. A critical look at the organization, involving senior management, is essential to uncover all bottlenecks. Changing traditional work and thought patterns can be challenging. In such cases, external assistance coupled with tried-and-tested frameworks and tools can be of help. 


Social Intelligence Is the Next Big Step for AI

When it comes to being able to decipher nonverbal cues like body language or facial expressions, AI still lacks many of the social skills that many of us humans take for granted. To help AI develop those social skills, new work from Chinese researchers suggests that a multidisciplinary approach will be needed — such as adapting what we know from cognitive science and using computational modeling to better identify the disparities between the social intelligence of machine learning models and their human counterparts. “[Artificial social intelligence or ASI] is distinct and challenging compared to our physical understanding of the world; it is highly context-dependent,” said first author Lifeng Fan of the Beijing Institute for General Artificial Intelligence (BIGAI) in a statement. “Here, context could be as large as culture and common sense, or as little as two friends’ shared experience. This unique challenge prohibits standard algorithms from tackling ASI problems in real-world environments, which are frequently complex, ambiguous, dynamic, stochastic, partially observable and multi-agent.”


Why Ambient Computing May Be the Next Big Trend

Ambient computing will become an everyday reality through the widespread adoption of connected devices, the Internet of Things (IoT), and advancements in artificial intelligence, Bilay predicts. “As these technologies become more sophisticated, affordable, and seamlessly integrated into our environments, ambient computing will permeate our homes, workplaces, and public spaces.” ... Bilay says users will need to remain vigilant about data protection. He cautions that ambient computing’s reliance on interconnected systems creates dependencies that could make users susceptible to service disruptions caused by technical failures or compatibility issues. Security is another major concern. “We’ve already seen cases in which an estranged spouse uses the smart thermostat or smart lighting to harass their ex,” Loukides says. When devices are networked, attacks could occur at a larger and more devastating scale. “We’re already familiar with ransomware,” he notes. “Could somebody extort a vendor like Honeywell or Nest because they’ve taken control over all the thermostats?”


Has generative AI quietly ushered in a new era of shadow IT on steroids?

There are dozens of great studies showing the dangers that come with shadow IT. A few of the concerns include decreased control over sensitive data, an increased attack surface, risk of data loss, compliance issues, and inefficient data analysis. Yes, there are many other security, privacy, and legal issues that can surface with shadow IT. But what concerns me the most is the astonishing growth in generative AI apps -- along with how fast these apps are being adopted for a myriad of reasons. Indeed, if the internet can best be described as an accelerator for both good and evil -- which I believe is true -- generative AI is supercharging that acceleration in both directions. Many are saying that the adoption of generative AI apps is best compared to the early days of the internet, with the potential for unparalleled global growth. ... If you're questioning whether generative AI apps qualify as shadow IT, as always it depends on your situation. If the application is appropriately licensed and all the data stays within the confines of your organization's secure control, generative AI can fit neatly into your enterprise portfolio of authorized apps.


What Is a Modern Developer?

The desire to simplify one's life, automate everything, and solve problems is the key thing that drives many modern developers. If this desire sounds familiar, then you are a developer. In the near future, you may only need to think of what the code should be and then write it out in sentences — aka a prompt engineer. This is coming so quickly that this future could be Tuesday. The heterogeneous nature of data, data producers, applications, and services that drives everyone to be a developer also highlights the importance of developers: so many diverse applications and systems need to be joined together to solve an entire real-world requirement that we need people to build those applications and integrations. ... The number of activities a developer has to do in modern development today goes beyond just designing, creating, building, testing, and deploying applications. Often in today’s resource-constrained environments, a common additional role is to gather and translate user requirements into buildable assets. Responsibilities also include internationalization, monitoring, managing, extracting data, and more.



Quote for the day:

“When people are financially invested, they want a return. When people are emotionally invested, they want to contribute.” -- Simon Sinek

Daily Tech Digest - July 24, 2023

A CDO Call To Action: Stop Hoarding Data—Save The Planet

Many of the implications of rampant data hoarding are reasonably well known—including the potential compliance and privacy risks associated with storing petabytes of data you know absolutely nothing about. For many companies, "don’t ask, don’t tell" seems to be the approach when it comes to ensuring compliant management of their dark data—or at the very least, ignoring dark data represents a business risk many compliance officers seem willing to take. Other implications on the cost of storing dark data, or the potential value that could be unlocked by operationalizing it, are also often discussed. For many CDOs, the motivation to store troves of data that may never get used is a form of FOMO, where the fear of being unable to support a future request for new analytical insights outweighs the cost of data storage. In these situations, the unwillingness of many CDOs to apply methods to measure the business value of data is a primary enabler of data hoarding, where the idea that "we might need it someday" is sufficient to drive millions in annual revenues for cloud service providers.


Google, Microsoft, Amazon, Meta Pledge to Make AI Safer and More Secure

Meta said it welcomed the White House agreement. Earlier this week, the company launched the second generation of its AI large language model, Llama 2, making it free and open source. "As we develop new AI models, tech companies should be transparent about how their systems work and collaborate closely across industry, government, academia and civil society," said Nick Clegg, Meta's president of global affairs. The White House agreement will "create a foundation to help ensure the promise of AI stays ahead of its risks," Brad Smith, Microsoft vice chair and president, said in a blog post. Microsoft is a partner on Meta's Llama 2. It also launched AI-powered Bing search earlier this year that makes use of ChatGPT and is bringing more and more AI tools to Microsoft 365 and its Edge browser. The agreement with the White House is part of OpenAI's "ongoing collaboration with governments, civil society organizations and others around the world to advance AI governance," said Anna Makanju, OpenAI vice president of global affairs.


UN Security Council discusses AI risks, need for ethical regulation

During the meeting, members stressed the need to establish an ethical, responsible framework for international AI governance. The UK and the US have already started to outline their position on AI regulation, while at least one arrest has occurred in China this year after the Chinese government enforced new laws relating to the technology. Malta is the only current non-permanent council member that is also an EU member state and would therefore be governed by the bloc’s AI Act, the draft of which was confirmed in a vote last month. Although AI can bring huge benefits, it also poses threats to peace, security and global stability due to its potential for misuse and its unpredictability — two essential qualities of AI systems, Clark said in comments published by the council after the meeting. “We cannot leave the development of artificial intelligence solely to private-sector actors,” he said, adding that without investment and regulation from governments, the international community runs the risk of handing over the future to a narrow set of private-sector players.


How to Choose Carbon Credits That Actually Cut Emissions

Risk is the biggest driver in business and — with trillions of dollars in annual climate-related costs and damage — the climate crisis is fast becoming a business crisis. Corporations must act now to minimize losses, illustrate meaningful climate action to shareholders and comply with fast-approaching climate regulations. Carbon credits are an important approach to scaling climate action globally and are a fast-growing strategy for delivering on corporate ESG goals. While these offsets are part of nearly every scenario that keeps global warming to 1.5 degrees Celsius, legacy carbon markets lack broad public trust: Impactful carbon solutions require clear guidelines and proven, verifiable data. ... This is an all-hands-on-deck moment. We must engage proven, reliable, and equitable methods to meet what may be the greatest threat to the future of humanity and the planet we inhabit. Carbon credits, when implemented responsibly and at scale, can be a very effective tool for humanity to use in the fight to limit the damages from climate change.


BGP Software Vulnerabilities Overlooked in Networking Infrastructure

At the heart of the vulnerabilities was message parsing. Typically, one would expect a protocol to check that a user is authorized to send a message before processing the message. FRRouting did the reverse, parsing before verifying. So if an attacker could have spoofed or otherwise compromised a trusted BGP peer's IP address, they could have executed a denial-of-service (DoS) attack, sending malformed packets in order to render the victim unresponsive for an indefinite amount of time. ... "Originally, BGP was only used for large-scale routing — Internet service providers, Internet exchange points, things like that," dos Santos says. "But especially in the last decade, with the massive growth of data centers, BGP is also being used by organizations to do their own internal routing, simply because of the scale that has been reached," to coordinate VPNs across multiple sites or data centers, for example. More than 317,000 Internet hosts have BGP enabled, most of them concentrated in China (around 92,000) and the US (around 57,000). Just under 2,000 run FRRouting — though not all, necessarily, with BGP enabled — and only around 630 respond to malformed BGP OPEN messages.


How IT leaders are driving new revenue

CIOs who are driving new revenue are: Delivering technologies designed to meet specific business outcomes. For example, Narayaran has seen CIOs focus their teams on creating applications designed not merely on high availability and reliability but on hitting very specific business goals — such as enabling on-time deliveries to its customers. Unlocking data’s potential. Narayaran says he has also seen CIOs make big plays with their data programs, investing in the technology infrastructure needed to bring together and analyze data sets to create new services or products and drive business objectives such as improved customer retention and customer stickiness. Co-creating with their business unit colleagues. Notably, Narayaran says CIOs are approaching their business unit colleagues with such proposals. “CIOs [are saying], ‘Here’s an opportunity. We have this data, and we can make this data do this for you,’ and they then bring that to life. And if they say, ‘This is what we have and this is what we can do,’ then the business, too, can come up with new ideas.”


Data centers grapple with staffing shortages, pressure to reduce energy use

A consistent issue for the past decade, attracting and retaining new talent will continue to challenge data center leaders, according to Uptime. About two-thirds of survey respondents said they have “problems recruiting or retaining staff,” but the trend appears to be stabilizing as it hasn’t increased over last year’s data. According to the report: More than one third (35%) of respondents say their staff is being hired away, which is more than double the 2018 figure of 17%. And many believe operators are poaching from within the sector, with 22% of respondents reporting that they lost staff to their competitors. Staffing challenges are highest among operations management staff and those specializing in mechanical and electrical trades, as well as with junior level staff. “It's been challenging the data center industry for about a decade. It has been escalating in recent years. Our survey data this year suggests that it may, at least this year, not be getting worse, maybe stabilizing. And poaching is a problem of people who do get qualified applicants into jobs – they do find them hired away,” said Jacqueline Davis, research analyst at Uptime Institute.


Why API attacks are increasing and how to avoid them

First, exposing APIs to network requests significantly increases the attack surface, says Johannes Ullrich, dean of research at the SANS Technology Institute. “An attack no longer needs access to the local system but can attack the API remotely,” he says. Even worse, APIs are designed to be easy to find and use, Ullrich says. They’re “self-documenting” and are typically based on common standards. That makes them convenient for developers, but also prime targets for hackers. Since APIs are designed to help applications talk to one another, they often have access to core company data, such as financial information or transaction records. It’s not only the data itself that’s at risk. The API documentation can also give outsider insights into business logic, says Ullrich. “This insight may make finding weaknesses in the business process easier.” Then there’s the quantity issue. Companies deploying cloud-based applications no longer deploy a single monolithic application with a single access point in and out. 


Journey to Quantum Supremacy: First Steps Toward Realizing Mechanical Qubits

Research and development in this field are growing at astonishing paces to see which system or platform outruns the other. To mention a few, platforms as diverse as superconducting Josephson junctions, trapped ions, topological qubits, ultra-cold neutral atoms, or even diamond vacancies constitute the zoo of possibilities to make qubits. So far, only a handful of qubit platforms have demonstrated to have the potential for quantum computing, marking the checklist of high-fidelity controlled gates, easy qubit-qubit coupling, and good isolation from the environment, which means sufficiently long-lived coherence. ... The realization of a mechanical qubit is possible if the quantized energy levels of a resonator are not evenly spaced. The challenge is to keep the nonlinear effects big enough in the quantum regime, where the oscillator’s zero point displacement is minuscule. If this is achieved, then the system may be used as a qubit by manipulating it between the two lowest quantum levels without driving it in higher energy states.
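The "not evenly spaced" condition above can be made concrete with the standard anharmonic-oscillator sketch (a generic textbook form, not a formula from the article):

```latex
% Harmonic resonator: evenly spaced ladder, every transition at \omega
E_n^{\text{harm}} = \hbar\omega\left(n + \tfrac{1}{2}\right),
\qquad E_{n+1} - E_n = \hbar\omega \quad \text{for all } n
% Anharmonic (Duffing-type) resonator: spacing depends on n
E_n \approx \hbar\omega\left(n + \tfrac{1}{2}\right)
        + \tfrac{\hbar\alpha}{2}\, n(n-1)
% so the two lowest transitions differ by the anharmonicity \alpha:
(E_2 - E_1) - (E_1 - E_0) = \hbar\alpha
```

When the anharmonicity exceeds the drive linewidth, a pulse at the 0-to-1 transition frequency addresses only the two lowest levels without leaking population into higher states, which is what lets the resonator be operated as a qubit.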


ChatGPT Amplifies IoT, Edge Security Threats

ChatGPT and its ilk are rapidly appearing integrated or embedded in commercial and consumer IoT of all types. Many imagine AI models to be the most sophisticated security threat to date. But most of what is imagined is indeed imaginary. “Now, if an actual AI emerges, be very worried if the kill switch is very far away from humans,” says Jayendra Pathak, chief scientist at SecureIQLab. He, like others in security and AI, agree that the chances of an actual general artificial intelligence developing any time soon are still very low. But as to the latest AI sensation, ChatGPT, well that’s another kind of scare. “ChatGPT poses [insider] threats -- similar to the way rogue or ‘all-knowing employees’ pose -- to IoT. Some of the consumer IoT vulnerabilities pose the same risk as a microcontroller or microprocessor does,” Pathak says. In essence, ChatGPT’s potential threats spring from its training to be helpful and useful. Such a rosy prime directive can be very harmful, however. 



Quote for the day:

"No man can stand on top because he is put there." -- H. H. Vreeland

Daily Tech Digest - July 23, 2023

Sustainable Computing - With An Eye On The Cloud

There are two parts to sustainable goals: 1. How do cloud service providers make their data centers more sustainable? 2. What practices can cloud service customers adopt to better align with the cloud and make their workloads more sustainable? Let us first look at the question of how businesses should be planning for sustainability. How should they bake in sustainability aspects as part of their migration to the cloud? The first aspect to consider, of course, is choosing the right cloud service provider. It is essential to select a carbon-thoughtful provider based on its commitment to sustainability as well as how it plans, builds, powers, operates, and eventually retires its physical data centers. The next aspect to consider is the process of migrating services to an infrastructure-as-a-service deployment model. Organizations should carry out such migrations without re-engineering for the cloud, as this can help to drastically reduce energy and carbon emissions as compared to doing so through an on-premise data center.


The Intersection of AI and Data Stewardship: A New Era in Data Management

In addition to improving data quality, AI can also play a crucial role in enhancing data security and privacy. With the increasing number of data breaches and growing concerns around data privacy, organizations must ensure that their data is protected from unauthorized access and misuse. AI can help organizations identify potential security risks and vulnerabilities in their data infrastructure and implement appropriate measures to safeguard their data. Furthermore, AI can assist in ensuring compliance with various data protection regulations, such as the General Data Protection Regulation (GDPR), by automating the process of identifying and managing sensitive data. Another area where AI and data stewardship intersect is in data governance. Data governance refers to the set of processes, policies, and standards that organizations use to ensure the proper management of their data assets. AI can help organizations establish and maintain robust data governance practices by automating the process of creating, updating, and enforcing data policies and rules. 
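As a down-to-earth illustration of what "automating the identification of sensitive data" can mean at its baseline, here is a hedged sketch (the regex patterns and field names are illustrative assumptions, not a GDPR-complete classifier; production tools layer ML models and contextual signals on top of this kind of scan):

```python
import re

# Illustrative-only patterns; a real classifier would use many more
# signals (context, checksums, trained models), not just regexes.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def scan_record(record: dict) -> dict:
    """Return {field: [categories]} for fields that look sensitive."""
    findings = {}
    for field, value in record.items():
        hits = [name for name, rx in PATTERNS.items()
                if rx.search(str(value))]
        if hits:
            findings[field] = hits
    return findings
```

A scan like this feeds the stewardship workflow: flagged fields can be routed to masking, access controls, or retention policies without a human reading every record.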


Saga Pattern With NServiceBus in C#

In its simplest form, a saga is a sequence of local transactions. Each transaction updates data within a single service, and each service publishes an event to trigger the next transaction in the saga. If any transaction fails, the saga executes compensating transactions to undo the impact of the failed transaction. The Saga Pattern is ideal for long-running, distributed transactions where each step needs to be reliable and reversible. It allows us to maintain data consistency across services without the need for distributed locks or two-phase commit protocols, which can add significant complexity and performance overhead. ... The Saga Pattern is a powerful tool in our distributed systems toolbox, allowing us to manage complex business transactions in a reliable, scalable, and maintainable way. Additionally, when we merge the Saga Pattern with the Event Sourcing Pattern, we significantly enhance traceability by constructing a comprehensive sequence of events that can be analyzed to comprehend the transaction flow in-depth.
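Outside of NServiceBus, the core compensating-transaction loop described above can be sketched in a few lines (Python is used here for brevity; the step names and in-memory structure are illustrative, not the NServiceBus API):

```python
class SagaStep:
    """One local transaction plus the action that undoes it."""
    def __init__(self, name, action, compensate):
        self.name = name
        self.action = action            # the local transaction
        self.compensate = compensate    # the compensating transaction

def run_saga(steps):
    """Run steps in order; on any failure, compensate the already
    completed steps in reverse order and report a rollback."""
    completed = []
    for step in steps:
        try:
            step.action()
            completed.append(step)
        except Exception:
            for done in reversed(completed):
                done.compensate()
            return False    # saga rolled back
    return True             # saga committed
```

In a hypothetical order saga, a failing charge-card step would trigger the compensation of an earlier reserve-stock step, restoring consistency across services without distributed locks or two-phase commit.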


Efficiency and sustainability in legacy data centers

A recent analyst report found a “wave of technological trends” is driving change throughout the data center sector at an unprecedented pace, with “rapidly diversifying business applications generating terabytes of data.” All that data has to go somewhere, and as hyperscale cloud providers push some of their workloads away from large, CapEx-intensive centralized hubs into Tier II and Tier III colocation markets — it’s looking like colos may be in greater demand than ever before. However, these circumstances pose a serious challenge for the colocation sector, as “the resulting workloads have exploded onto legacy data center infrastructures”, many of which may be “ill-equipped to handle them.” Now, the colocation market finds itself caught between two conflicting macroeconomic forces. On one hand, the growth in demand puts greater pressure on operators in Tier II and III markets to build more facilities, faster, to accommodate larger and more complex workloads; on the other, the existential need to reduce carbon emissions and slash energy consumption is vital.


A quantum computer that can’t be simulated on classical hardware could be a reality next year

The current-generation machines are still very much in the noisy era of quantum computing. Ilyas Khan, who founded Cambridge Quantum out of the University of Cambridge in 2014 and now serves as chief product officer, told Tech Monitor that we’re moving into a “mid-stage NISQ” era, where the machines are still noisy but we’re seeing signs of logical qubits and utility. Thanks to error correction, detection and mitigation techniques, many companies have been able to produce usable outcomes even on noisy, error-prone qubits. But at this stage, the structure and performance of the quantum circuits could still be simulated using classical hardware. That will change next year, says Khan. “We think it’s important for quantum computers to be useful in real-life problem solving,” he says. “Our current system model, H2, has 32 qubits in its current instantiation, all-to-all connected with mid-circuit measurement.”


Protecting your business through test automation

The inadequate pre-launch testing forces teams to then scramble post-launch to fix faulty software applications with renewed urgency, with the added pressure of managing the potential loss of revenue and damaged brand reputation caused by the defect. When the faulty software reaches end users, dissatisfied customers are a problem that could have far longer-reaching effects as users pass on their negative experiences to others. The negative feedback could also prevent potential new customers from ever trying the software in the first place. So why is software not being tested properly? Changing customer behaviours in the financial services sector, as well as increased competition from digital-native fintech start-ups, have led many organisations to invest in a huge amount of digital transformation in recent years. With companies coming under more pressure than ever to respond to market demands and user experience trends through increasingly frequent software releases, the sheer volume of software needing testing has skyrocketed, placing a further burden on resources already stretched to breaking point.


Implementing zero trust with the Internet of Things (IoT)

There’s a strongly held view that it simply isn’t possible to trust any IoT device, even if it’s equipped with automatic security updating. “As a former CIO, my guidance is that preparation is the best defense,” Archundia tells ITPro. IoT devices are often just too much of a risk; they’re too much of a soft entry point into the organization to overlook them. It’s best to assume each device is a hole in an enterprise’s defenses. Perhaps each device won’t be a hole at all times, but some may be for at least some of the times. So long as the hole isn’t plugged, it can be found and exploited. That’s actually fine in a zero trust environment, because it assumes every single act, by a human or a device, could be malicious. ... “Because zero trust focuses on continuously verifying and placing security as close to each asset as possible, a cyber attack need not have far-reaching consequences in the organization,” he says. “By relying on techniques such as secured zones, the organization can effectively limit the blast radius of an attack, ensuring that a successful attack will have limited benefits for the threat agent.”


US Data Privacy Relationship Status: It’s Complicated

The American Data Privacy and Protection Act (ADPPA) is a bill that, if passed, would become the first set of federal privacy regulations to supersede state laws. While it passed a House of Representatives commerce committee vote by a 53-2 margin in July 2022, the bill is still waiting on a full House vote and then a Senate vote. In the US, 10 states have enacted comprehensive privacy laws, including California, Colorado, Connecticut, Indiana, Iowa, Montana, Tennessee, Texas, Utah, and Virginia. More than a dozen other states have proposed bills in various stages of activity. The absence of an overarching federal law means companies must pick and choose based on where they happen to be doing business. Some businesses opt to start with the most stringent law and model their own data privacy standards accordingly. The current global standard for privacy is Europe’s 2018 General Data Protection Regulation (GDPR), which has become the model for other data privacy proposals. Since many large US companies do business globally, they are very familiar with GDPR.


KillNet DDoS Attacks Further Moscow's Psychological Agenda

Mandiant's assessment of the 500 DDoS attacks launched by KillNet and associated groups from Jan. 1 through June 20 offers further evidence that the collective isn't some grassroots assembly of independent, patriotic hackers. "KillNet's targeting has consistently aligned with established and emerging Russian geopolitical priorities, which suggests that at least part of the influence component of this hacktivist activity is intended to directly promote Russia's interests within perceived adversary nations vis-a-vis the invasion of Ukraine," Mandiant said. Researchers said KillNet and its affiliates often attack technology, social media and transportation firms, as well as NATO. ... To hear KillNet's recounting of its attacks via its Telegram channel, these hacktivists are nothing short of devastating. The same goes for other past and present members of the KillNet collective, including KillMilk, Tesla Botnet, Anonymous Russia and Zarya. Recent attacks by Anonymous Sudan have involved paid cloud infrastructure and had a greater impact, although it's unclear if this will become the norm.


Agile vs. Waterfall: Choosing the Right Project Methodology

Choosing the right project management methodology lays the foundation for effective planning, collaboration, and delivery. Failure to select the appropriate methodology can lead to many challenges and setbacks that can hinder project progress and ultimately impact overall success. Let's delve into why it's crucial to choose the right project management methodology and explore in-depth what can go wrong if an unsuitable methodology is employed. ... The right methodology enables effective resource allocation and utilization. Projects require a myriad of resources, including human, financial, and technological. If you select an inappropriate methodology, you can experience inefficient resource management, causing budget overruns, underutilization of skills, and time delays. For instance, an Agile methodology that relies heavily on frequent collaboration and iterative development may not be suitable for projects with limited resources and a hierarchical team structure.



Quote for the day:

"People leave companies for two reasons. One, they don't feel appreciated. And two, they don't get along with their boss." -- Adam Bryant