Daily Tech Digest - September 08, 2023

Peril vs. Promise: Companies, Developers Worry Over Generative AI Risk

One widespread concern over AI is that the systems will replace developers: 36% of developers worry that they will be replaced by an AI system. Yet the GitLab survey also gave more weight to arguments that disruptive technologies result in more work for people: Nearly two-thirds of companies hired employees to help manage AI implementations. Part of the concern seems to be generational. More experienced developers tend not to accept the code suggestions made by AI systems, while more junior developers are more likely to accept them, Lemos says. Yet both are looking to AI to assist them with the most boring work, such as documentation and creating unit tests. "I'm seeing a lot more developers raising the idea of having their documentation written by AI, or having test coverage written by AI, because they care less about the quality of that code, but just that the test works," he says. "There's both a security and a development benefit in having better test coverage, and it's something that they don't have to spend time on."


Feds Urge Immediate Patching of Zoho and Fortinet Products

CISA found that beginning in January, multiple APT groups separately exploited two different critical vulnerabilities to gain unauthorized access and exfiltrate data from the organization. Both of the unrelated flaws - CVE-2022-47966 in Zoho ManageEngine and CVE-2022-42475 in Fortinet FortiOS SSL VPN - have been classified as being of critical severity, meaning they can be exploited to remotely execute code, allowing attackers to take control of the system and pivot to other parts of the network. Each of the vendors issued updates patching their flaws in late 2022. Researchers refer to these as N-day vulnerabilities, meaning known flaws, as opposed to zero-day vulnerabilities, for which no patch is yet available. The alert, issued by CISA, the FBI and U.S. Cyber Command's Cyber National Mission Force, includes details of how attackers used each of the flaws to gain wider access to victims' networks. The advisory doesn't state which nation or nations' APT groups have been tied to known exploits of these flaws.


Scrum Master Skills We Rarely Talk About: Change Management

The initial stride towards constructing a "compelling case for change" is articulating the vision of the type of organization we aspire to become. It's crucial to emphasize that the organization's mode of operation should never serve as the ultimate goal in itself. Rather, it serves as a supplementary element that "enables" the organization in the pursuit of its objectives. This, in turn, gives rise to the necessity for change, marking the starting point of the entire process. A clearly expressed need for change (or the response to the question "Why exactly?") opens the gateway to the subsequent consideration: how should our organization function to realize its goals? This is what we refer to as the Ideal State. Once we've defined the Ideal State of the organization, we can precisely articulate the exact optimizations required, alongside the pivotal indicators we will employ to monitor our progress throughout the change process. The Optimization Goal acts as our compass, guiding the direction of change or indicating precisely what adjustments need to be made.


Cloud first is dead—cloud smart is what’s happening now

Cloud smart involves making the best use of cloud concepts whether they are on premises or off, making the most rational choice of locality a fundamental part of the thinking. A cloud smart architectural approach is essential because it enables enterprises to optimize their on-premises IT infrastructure and leverage the benefits of the cloud as well. With cloud smart architecture, enterprises can design and deploy highly available, scalable, and resilient solutions that have cloud operating characteristics to adapt to their changing business needs. After the initial rush to public cloud, this belated dose of reality is a positive. It reflects the recognition that there needs to be a smarter balance between what's on premises and what's in the public cloud. Knowing how to strike the right balance—with the understanding that not every application is meant for the cloud—can ensure that you optimize performance, reliability, and cost, driving better long-term outcomes for your organization.


Are We Ready for a World Without Passwords?

Passwordless authentication simply means eliminating passwords. The FIDO Alliance introduced FIDO2, a universally accepted authentication protocol offering frictionless, phishing-resistant, passwordless authentication. FIDO2 allows users to authenticate to a web, SaaS, or mobile application using native device biometrics or a PIN from their laptop, desktop, or mobile phone. The user can access any application with a simple swipe on the fingerprint reader, a glance at the camera, or by entering a static PIN on their device. FIDO2 passwordless authentication is MFA by default and phishing resistant, since the attacker needs physical access to the device as well as the user's PIN or biometrics. FIDO2 uses cryptographic keys (public and private) where the private key and the user's biometric data never leave the user's device, thereby protecting the user's privacy. It also prevents user activity tracking across services, since a unique set of credentials is generated for each service.
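To make the mechanics concrete, here is a minimal sketch of the challenge/response idea behind FIDO2, using Python's "cryptography" package to stand in for the authenticator's keypair. The flow and names are illustrative assumptions, not the actual WebAuthn wire protocol.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the device generates a fresh keypair per service; only the
# public key ever leaves the device, protecting privacy and preventing
# cross-service tracking.
device_private_key = Ed25519PrivateKey.generate()
server_stored_public_key = device_private_key.public_key()

# Authentication: the server sends a random challenge; the device signs it
# after a local biometric/PIN check; the server verifies the signature.
challenge = os.urandom(32)
signature = device_private_key.sign(challenge)  # gated by biometrics/PIN on a real authenticator
server_stored_public_key.verify(signature, challenge)  # raises InvalidSignature on failure
print("authenticated")
```

Because the server holds only a public key, a breach of its credential database yields nothing an attacker can replay, which is the property that makes the scheme phishing resistant.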


Is Security a Dev, DevOps or Security Team Responsibility?

Security is not the job of any one group or type of role. On the contrary, security is everyone’s job. Forward-thinking organizations must dispense with the mindset that a certain team “owns” security, and instead embrace security as a truly collective team responsibility that extends across the IT organization and beyond. After all, there is a long list of stakeholders in cloud security, including: Security teams, who are responsible for understanding threats and providing guidance on how to avoid them; Developers, who must ensure that applications are designed with security in mind and that they do not contain insecure code or depend on vulnerable third-party software to run; ITOps engineers, whose main job is to manage software once it is in production and who therefore play a leading role both in configuring application-hosting environments to be secure and in monitoring applications to detect potential risks; DevOps engineers, whose responsibilities span both development and ITOps work, placing them in a position to secure code during both the development and production stages.


Windows desktop apps are the future (with or without Windows)

Microsoft is betting big on this with Windows 365. Currently available only for businesses, Windows 365 is a Windows desktop-as-a-service hosted by Microsoft. Businesses can set up their employees with remotely accessed Windows desktops. Those employees can access them through nearly any device: a Chromebook, Mac, iPad, Android tablet, smart TV, smartphone, or whatever — even from a PC. Microsoft is building better support for accessing Windows 365 desktops into Windows 11, letting you flip between your cloud PC and local PC from the “Task View” button on your taskbar or even boot straight to a Windows 365 cloud PC desktop on a physical Windows 11 PC. While this is only for businesses at the moment, internal documents show Microsoft is working on Windows 365 cloud PC plans for home users. It’s not just about Microsoft, either. Even Google now has a new solution for running Windows apps natively in ChromeOS called “ChromeOS Virtual App Delivery.” 


How Failures Lead to Innovation

When failure occurs, not giving up or abandoning your idea is essential. Instead, look at the problem differently and find a new solution. This process involves a series of steps that, when combined, can lead to groundbreaking innovation. First, there’s a need to reassess your vision and redefine your objectives. What was the original goal? Is it still relevant, or does the failure open up a new direction that could be more beneficial? Second, identify the root cause of the failure and understand its implications. This is where a deep dive into the details is crucial. In doing so, you might uncover overlooked opportunities or hidden insights. Third, brainstorm new solutions. Use the knowledge gained from the failure to think of innovative approaches or strategies that could work better. Fourth, prototype and test these new ideas. Not every new idea will be successful, but through prototyping and testing, you’ll get closer to finding a solution that works. Fifth, iterate on the process. Innovation is rarely a one-off event. It’s a continuous process of learning, designing, testing, and refining.


Velocity Over Speed, A Winner Every Time

Precision Bias is the utterly false belief that we can predict any time length, ever. No one saw COVID coming. So every damn prediction at the time did not come true. And while most delays are not caused by such global meltdowns, they still happen. But the addiction to speed itself is one of the largest factors in slowing down our delivery times. To understand velocity, we have to understand value, both intangible value and direct value. I call this ‘soaking in numbers’. When I am with a new client (read my article on clients vs. customers) I like to sit and learn every value metric they find important. I want mean time to recover. I want the number of new customers per day. I want net promoter scores, profitability, lead times, partner surveys, employee turnover, all of it. These are the language of value that a set of stakeholders uses to describe value. Notice how few of those measures involve speed numbers? I guesstimate that only 10-15% of any set of measures will be speed related. In fact, speed will cause many of those metrics to fail. Too many new hires, too many orders, too many acquisitions.


How to Succeed with Unifying DataOps and MLOps Pipelines

How to actually integrate data and ML pipelines depends on an organization’s existing overall structure. “Organizations are essentially either centralized or decentralized,” Kobielus said. For those that are already centralized to one degree or another, unifying data and ML pipelines is really just a question of converging the existing back ends -- often in the form of a data lakehouse. In the case of a more decentralized organization, Kobielus explained, unification of the different back ends requires an abstraction layer that enables users to query data in a uniform, simplified way across all the disparate environments where it may reside. For many organizations, this layer is taking the form of a data mesh or a data fabric that consolidates access to data and analytics across a range of environments. “The bottom line for success,” Kobielus said, “is to what extent you can build more monetizable data and analytics and the degree to which you can automate all of it. That automation needs to happen on the back end.” 
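As a rough illustration of the abstraction-layer idea Kobielus describes, the sketch below fronts two disparate back ends with a single query interface. All class and method names are invented for the example; real data fabric or data mesh products expose far richer APIs.

```python
from typing import Protocol

class DataSource(Protocol):
    def query(self, sql: str) -> list[dict]: ...

class WarehouseSource:
    def query(self, sql: str) -> list[dict]:
        return [{"source": "warehouse", "query": sql}]  # placeholder result

class LakehouseSource:
    def query(self, sql: str) -> list[dict]:
        return [{"source": "lakehouse", "query": sql}]  # placeholder result

class DataFabric:
    """Routes one uniform query across every registered environment."""
    def __init__(self, sources: dict[str, DataSource]):
        self.sources = sources

    def query_all(self, sql: str) -> dict[str, list[dict]]:
        return {name: src.query(sql) for name, src in self.sources.items()}

fabric = DataFabric({"sales": WarehouseSource(), "ml": LakehouseSource()})
print(fabric.query_all("SELECT count(*) FROM customers"))
```

The point of the design is that consumers write one query against one interface, while the fabric worries about where the data actually lives.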



Quote for the day:

"If you set your goals ridiculously high and it's a failure, you will fail above everyone else's success." --James Cameron

Daily Tech Digest - September 06, 2023

Open Source Needs Maintainers. But How Can They Get Paid?

The data show that not only are open source maintainers usually unaware of current security tools and standards, like software bills of materials (SBOMs) and supply-chain levels for software artifacts (SLSA), but they are largely unpaid and, to a frightening degree, on their own. A study released in May by Tidelift found that 60% of open source maintainers would describe themselves as “unpaid hobbyists.” And 44% of all maintainers said they are the only person maintaining a project. “Even more concerning than the sole maintainer projects are the zero maintainer projects, of which there are a considerable amount as well that are widely used,” Donald Fischer, CEO and co-founder of Tidelift, told The New Stack. “So many organizations are just unaware because they don’t even have telemetry, they have no data or visibility into that.” ... An even bigger threat to continuity in open source project maintenance is the “boss factor,” according to Fischer. The boss factor, he said, emerges when “somebody gets a new job, and so they don’t have as much time to devote to their open source projects anymore, and they kind of let them fall by the wayside.”


Your data is critical – do you have the right strategy in place for resilience?

Recovering multi-master databases requires specialist skills and understanding to prevent problems around concurrency. In effect, this means having one agreed list of transactions rather than multiple conflicting lists that might contradict each other. Similarly, you have to ensure that any recovery brings back the right data, rather than any corrupted records. Planning ahead on this process makes it much easier, but it also requires skills and experience to ensure that DR processes will work effectively. Alongside this, any DR plan will have to be tested to prove that it will work, and work consistently when it is most needed. Any plan around data has to take three areas into account – availability, restoration and cost. Availability planning covers how much work the organisation is willing to do to keep services up and running, while restoration covers how much time and data has to be recovered in the event of a disaster. Lastly, cost covers the amount of budget available to cover these two areas, and how much has to be spent in order to meet those requirements.
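As a toy illustration of the restoration planning mentioned above, the snippet below checks whether a backup schedule meets a recovery point objective (RPO), i.e., the maximum tolerable data loss. The numbers are invented for the example.

```python
rpo_hours = 4              # business tolerates losing at most 4 hours of data
backup_interval_hours = 6  # current schedule

# Worst case, a disaster strikes just before the next backup runs, losing
# everything written since the previous one.
worst_case_loss_hours = backup_interval_hours
if worst_case_loss_hours <= rpo_hours:
    print("RPO met")
else:
    print(f"RPO missed by {worst_case_loss_hours - rpo_hours}h: back up more often")
```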


7 tough IT security discussions every IT leader must have

Cybercriminals never sleep; they’re always conniving and corrupting. “When it comes to IT security strategy, a very direct conversation must be held about the new nature of cyber threats,” suggests Griffin Ashkin, a senior manager at business management advisory firm MorganFranklin Consulting. Recent experience has demonstrated that cybercriminals are now moving beyond ransomware and into cyberextortion, Ashkin warns. “They’re threatening the release of personally identifiable information (PII) of organization employees to the outside world, putting employees at significant risk for identity theft.” ... The meetings and conversations should lead to the development or update of an incident response plan, he suggests. The discussions should also review mission-critical assets and priorities, assess an attack’s likely impact, and identify the most probable attack threats. By changing the enterprise’s risk management approach from matrix-based measurement (high, medium, or low) to quantitative risk reduction, you’re basing actual potential impact on as many variables as needed, Folk says.
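A small worked example of the quantitative approach Folk describes might use the standard annualized loss expectancy (ALE) formula, ALE = single loss expectancy x annual rate of occurrence. The dollar figures below are invented for illustration.

```python
single_loss_expectancy = 250_000  # estimated cost of one incident, USD
annual_rate_of_occurrence = 0.3   # expected incidents per year

ale = single_loss_expectancy * annual_rate_of_occurrence
print(f"Annualized loss expectancy: ${ale:,.0f}")  # $75,000

# A control costing $40k/yr that halves the occurrence rate pays for itself:
residual_ale = single_loss_expectancy * (annual_rate_of_occurrence / 2)
print(f"Annual risk reduction: ${ale - residual_ale:,.0f} vs. $40,000 control cost")
```

Unlike a high/medium/low matrix, this framing lets leaders compare a control's cost directly against the dollar risk it removes.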


Emerging threat: AI-powered social engineering

As malicious actors gain the upper hand, we could potentially find ourselves stepping into a new era of espionage, where the most resourceful and innovative threat actors thrive. The introduction of AI brings about a new level of creativity in various fields, including criminal activities. The crucial question remains: How far will malicious actors push the boundaries? We must not overlook the fact that cybercrime is a highly profitable industry with billions at stake. Certain criminal organizations operate similarly to legal corporations, having their own infrastructure of employees and resources. It is only a matter of time before they delve into developing their own deepfake generators (if they haven’t already done so). With their substantial financial resources, it’s not a matter of whether it is feasible but rather whether it will be deemed worthwhile. And in this case, it likely will be. What preventative measures are currently on offer? Various scanning tools have emerged, asserting their ability to detect deepfakes.


Scrum is Not Agile Enough

Scrum thrives in scenarios where the project’s requirements might evolve or where customer feedback is crucial, because of its short sprints. It works well when a team can commit to the roles, ceremonies, and iterative nature of the framework. When there is a need for clear accountability and communication among team members, stakeholders, and customers, Scrum works better than Kanban, which relies on less rigid task allocation. The problem is the scale at which Scrum is used. While there is some consensus on the strengths of the methodology, it is not applicable to all projects. One common situation engineers face: in teams that build multiple applications, individuals can’t start a new story until all the ongoing stories are complete. Team members who have finished remain idle until everyone else has finished their story, which is entirely inefficient. Long meetings are another pain point; there’s a substantial investment in planning and meetings. Significant time is allocated to discussing stories that sometimes require only 30 minutes for completion.


Technology Leaders Can Turbocharge Their Company’s Growth In Five Ways

Some growth will be powered by new technologies; CIOs and other technology leaders can demonstrate how emerging technologies create specific growth opportunities. Instead of pitching random acts of metaverse or blockchain, which require radical changes in life or trade to matter, technology leaders can iterate on new technologies and infuse ideas from these into their own products. ... Outcomes of all kinds can always be improved — AI is just the newest tool in the improvement toolkit, joining analytics, automation and software. Personalization at scale is a good example of amplifying growth. Technology leaders should collaborate with marketing colleagues and mine databases to find better purchase signals that improve offers and outreach. They can also automate processes to streamline onboarding and improve revenue recognition. ... No technology leader and no company will do this alone. They will work with technology and service providers to build and operate the new capabilities, including those powered by generative AI.


Proposed SEC Cybersecurity Rule Will Put Unnecessary Strain on CISOs

In its current form, the proposed rule leaves a lot of room for interpretation, and it's impractical in some areas. For one, the tight disclosure window will put massive amounts of pressure on chief information security officers (CISOs) to disclose material incidents before they have all the details. Incidents can take weeks and sometimes months to understand and fully remediate. It is impossible to know the impact of a new vulnerability until ample resources are dedicated to remediation. CISOs may also end up having to disclose vulnerabilities that, with more time, end up being less of an issue and therefore not material. ... Another issue is the proposal's requirement to disclose circumstances in which a security incident was not material on its own but has become so "in aggregate." How does this work in practice? Is an unpatched vulnerability from six months ago now in scope for disclosure (given that the company didn't patch it) if it's used to extend the scope of a subsequent incident? We already conflate threats, vulnerabilities, and business impact.


Contending with Artificially Intelligent Ransomware

Deploying a malicious payload onto a targeted computer is a very complex task. It’s not a static executable that can be easily detected based on signatures. AI could generate a customized payload for each victim, progressively advancing within compromised systems with patience and precision. The key to successful malware lies in emulating normal, expected behavior to avoid triggering any defensive measures, even from vigilant users themselves. We’re witnessing genuinely authentic-looking software emerging in various distributions, ostensibly offering specific functionality while harboring ulterior motives: earning users’ trust and eventually acting with malicious intent. In this context, AI is entirely capable of streamlining the process, crafting software with dormant malicious capabilities primed for activation at a later point, possibly during the next update.


3 types of incremental forever backup

The first type of incremental forever backup is a file-level incremental forever backup product. This approach has actually been around for quite some time, with early versions of it available in the ‘90s. The reason it is called a file-level incremental is that the decision to back up an item happens at the file level. If anything within a file changes, its modification date changes, and the entire file will be backed up. ... Another incremental forever backup approach is block-level incremental forever. This method is similar to the previous method in that it will perform one full backup and a series of incremental backups – and will never again perform a full backup. In a block-level incremental backup approach, the decision to back up something happens at the bit or block level. ... The final type of incremental forever backup is called source deduplication backup software, which performs the deduplication process at the very beginning of the backup. It makes the decision at the backup client as to whether or not to transfer a new chunk of data to the backup system.
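A minimal sketch of the file-level decision described above, assuming a hypothetical catalog that records when the last backup ran. Any change to a file bumps its modification time, so the whole file is re-sent, which is exactly the granularity the block-level and deduplication approaches improve on.

```python
import os
import time

last_backup_time = time.time() - 24 * 3600  # e.g., yesterday's run (illustrative)

def needs_backup(path: str) -> bool:
    # File-level rule: if the modification time is newer than the last
    # backup, the entire file is a candidate, however small the change.
    return os.path.getmtime(path) > last_backup_time

for entry in os.scandir("/data"):  # hypothetical data directory
    if entry.is_file() and needs_backup(entry.path):
        print(f"backing up {entry.path}")
```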


The Future of Work is Remote: How to Prepare for the Security Challenges

When embracing hybrid or remote work, the lack of in-person contact among staff may have a less-than-ideal effect on corporate culture. For those “forced back” to the office, disgruntlement will breed resentment. In both cases, disengagement between staff and their employer will have an adverse effect on their attitudes toward the company and, consequently, heighten the risk of insider threats, whether through accident, judgment errors or malicious intent. ... New security technology can streamline and bolster defenses but often falls short. Without human interaction and experience, these systems lack the context to make accurate decisions. As a result, they may generate false positives or miss real threats. Security technology is often designed to work with little or no human input, which can lead to problems when the system encounters something it doesn’t understand; for example, a new type of malware or a sophisticated attack. Security systems need to be regularly updated; otherwise, they’re at risk of becoming obsolete.



Quote for the day:

"Never say anything about yourself you do not want to come true." -- Brian Tracy

Daily Tech Digest - September 05, 2023

GenAI in productivity apps: What could possibly go wrong?

The first and most obvious risk is the accuracy issue. Generative AI is designed to generate content — text, images, video, audio, computer code, and so on — based on patterns in the data it’s been trained on. Its ability to provide answers to legal, medical, and technical questions is a bonus. And in fact, often the AIs are accurate. The latest releases of some popular genAI chatbots have passed bar exams and medical licensing tests. But this can give some users a false sense of security, as when a couple of lawyers got in trouble by relying on ChatGPT to find relevant case law — only to discover that it had invented the cases it cited. That’s because generative AIs are not search engines, nor are they calculators. They don’t always give the right answer, and they don’t give the same answer every time. For generating code, for example, large language models can have extremely high error rates, said Andy Thurai, an analyst at Constellation Research. “LLMs can have rates as high as 50% of code that is useless, wrong, vulnerable, insecure, and can be exploited by hackers,” he said. 


CFOs and IT Spending: Best Practices for Cost-Cutting

Auvik Networks’ Feller stressed it is important for CFOs not to come in and start slashing everything. “There was a reason why IT applications and services were purchased in the first place and, in today’s corporate environment, many of these systems are integrated with each other and into employees’ work processes,” he says. “CIOs should have a good idea of what’s critical and sensitive.” He says the way he tends to approach this is by working with the CIO to identify the applications that are main “sources of truth” for key corporate data. These tend to be the financial and accounting systems or enterprise resource planning (ERP), customer relationship management (CRM), human resources information system (HRIS), and often a business intelligence (BI) system. “For each of those key systems, we evaluate whether they are still the right choice for where the company has evolved and will they scale as the company grows,” he says. “Replacing one or more of those systems can be a big, complicated project but is often essential to a company’s success.”


Hackers Adding More Capabilities to Open Source Malware

Researchers observed that the malware samples are currently being used by multiple threat actors, and various variants of this threat are already in the wild, with threat actors improving its efficiency and effectiveness over time. The malware is capable of stealing sensitive information from infected systems, including host information, screenshots, cached browser credentials and files stored on the system that match a predefined list of file extensions. It also attempts to determine the presence of credential databases for browser applications including Chrome, Yandex, Edge and Opera. Once executed, the malware creates a working directory, and a file grabber executes and attempts to locate any files stored within the victim's Desktop folder that match a list of file extensions including .txt, .pdf, .doc, .docx, .xml, .img, .jpg and .png. The malware then creates a compressed archive called log.zip containing all of the logs, and the data is transmitted to the attacker via Simple Mail Transfer Protocol "using credentials defined in the portion of code responsible for crafting and sending the message."


Connected cars and cybercrime: A primer

Connected car cybercrime is still in its infancy, but criminal organizations in some nations are beginning to recognize the opportunity to exploit vehicle connectivity. Surveying today’s underground message forums quickly reveals that the pieces could quickly fall into place for more sophisticated automotive cyberattacks in the years ahead. Discussions on underground crime forums around data that could be leaked and needed/available software tools to enable attacks are already intensifying. A post from a publicly searchable auto-modders forum about a vehicle’s multi-displacement system (MDS) for adjusting engine performance is symbolic of the current activity and possibilities. Another, in which a user on a criminal underground forum offers a data dump from a car manufacturer, points to the possible threats likely coming to the industry. Though attackers still seem to be limited to accessing regular stolen data, compromises and network accesses are already for sale in the underground.


Identify Generative AI’s Inherent Risks to Protect Your Business

Generative AI models have basically three attack surfaces: the architecture of the model itself, the data it was trained on, and the data fed into it by end users. For example, adversarial attacks and data poisoning depend on the model’s training data having a security flaw and thus being open to manipulation and infiltration. This allows threat actors to inject incorrect or misleading information into the training data, which the model uses to generate responses, leading to inaccurate information presented as accurate by a trusted model and, subsequently, flawed decision-making. Model extraction attacks depend on the skill of the hacker to compromise the model itself. The threat actor queries the model to gain information about its structure and, therefore, determine the actions it executes and what its targets are. One goal of this sort of attack could be reverse-engineering the model’s training data, for instance, private customer data, or recreating the model itself for nefarious purposes. Notably, any of these attacks can take place before or after the model is installed at a user site. 


How attackers exploit QR codes and how to mitigate the risk

A common attack involves placing a malicious QR code in public, sometimes covering up a legitimate QR code, and when unsuspecting users scan the code they are sent to a malicious web page that could host an exploit kit, Sherman says. This can lead to further device compromise or possibly a spoofed login page to steal user credentials. "This form of phishing is the most common form of QR exploitation," Sherman says. QR code exploitation that leads to credential theft, device compromise or data theft, and malicious surveillance are the top concerns for both enterprises and consumers, he says. If QR codes lead to payment sites, users might divulge their passwords and other personal information that could fall into the wrong hands. "Many websites do drive-by download, so mere presence on the site can start malicious software download," says Rahul Telang, professor of information systems at Carnegie Mellon University’s Heinz College.
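One simple mitigation, sketched below under the assumption of a company-managed allowlist, is to validate any URL decoded from a QR code before opening it rather than trusting it blindly. The host names are invented for the example.

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"pay.example.com", "menu.example.com"}  # hypothetical allowlist

def is_safe_qr_url(url: str) -> bool:
    parsed = urlparse(url)
    # Require HTTPS and a known destination before the browser ever loads it.
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS

print(is_safe_qr_url("https://pay.example.com/checkout"))  # True
print(is_safe_qr_url("http://evil.example.net/exploit"))   # False
```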


The ‘IT Business Office’: Doing IT’s admin work right

Each IT manager has a budget to manage to. Sadly, in most companies budgeting looks more like a game of pin-the-tail-on-the-donkey than a well-defined and consistent algorithm. In principle, a lot of IT staffing can be derived from a parameter-driven model. This can be hard to reconcile with Accounting’s requirements for budget development. With an IT Business Office to manage the relationship with Accounting, IT can explain its methods once, instead of manager-by-manager-by-manager. ... Business-wide, new-employee onboarding should be coordinated by HR, but more often each piece of the onboarding puzzle is left to the department responsible for that piece. An IT Business Office can’t and shouldn’t try to fix this often-broken process throughout the enterprise. But onboarding new IT employees is, if anything, even more complicated than onboarding anyone else’s employees. An IT Business Office can, if nothing else, smooth things out for newly hired IT professionals so they can start to work the day they show up for work.


MSSQL Databases Under Fire From FreeWorld Ransomware

According to an investigation by Securonix, the typical attack sequence observed for this campaign begins with brute-forcing access into the exposed MSSQL databases. After initial infiltration, the attackers expand their foothold within the target system and use MSSQL as a beachhead to launch several different payloads, including remote-access Trojans (RATs) and a new Mimic ransomware variant called "FreeWorld," named for the inclusion of the word "FreeWorld" in the binary file names, a ransom instruction file named FreeWorld-Contact.txt, and the ransomware extension, ".FreeWorldEncryption." The attackers also establish a remote SMB share to mount a directory housing their tools, which include a Cobalt Strike command-and-control agent (srv.exe) and AnyDesk, and they deploy a network port scanner and Mimikatz for credential dumping and lateral movement within the network. Finally, the threat actors carried out configuration changes, from user creation and modification to registry changes, to impair defenses.


Managing Data as a Product: What, Why, How

Applying product management principles to data includes attempting to address the needs of as many different potential consumers as possible. This requires developing an understanding of the consumer base. The consumers are typically in-house staff accessing the organization’s data. (The data is not being “sold,” but is being treated as a product available for distribution, by identifying the consumers’/in-house staff’s needs.) From a big-picture perspective, the business’s goal is to maximize the use of its in-house data. Managing data as a product requires applying the appropriate product management principles. ... The data as a product philosophy is an important feature of the data mesh model. Data mesh is a decentralized form of data architecture. It is controlled by different departments or offices – marketing, sales, customer service – rather than a single location. Historically, a data engineering team would perform the research and analytics, a process that severely limited research when compared to the self-service approach promoted by the data as a product philosophy, and the data mesh model.


Enterprise Architecture Must Look Beyond Venturing the Gap Between Business and IT

The architects should not be the ones managing and maintaining the repository by themselves. They should facilitate the rest of the organization to make sure that everyone can use the repository. Architecture needs to become part of every strategic and tactical role in your organization. I think EA is basically following the path that so many other industries and disciplines have followed already. It’s the path of democratization. Today, we all have our supercomputer in our pocket, meaning that we have more functionality than ever before. And we don’t even have to go to a machine room, we don’t even have to go to our desk anymore; we can just take it out of our pocket and let it help us make the right decisions about where we want to go, how we’re going to send an email, which decision we’re making. This self-service way of working has really enabled organizations to be much more efficient, much more transparent, much more effective. And I think this is what we want to achieve with EA, as well.



Quote for the day:

“Just because you’re a beginner doesn’t mean you can’t have strength.” -- Claudio Toyama

Daily Tech Digest - September 04, 2023

What happens when finops finds bad cloud architecture?

Cloud finops teams can evaluate the performance and scalability of cloud infrastructure. Monitoring key performance indicators such as response times, latency, and throughput can identify bottlenecks or areas where the current architecture limits scalability and performance. Since finops normally tracks this through money spent, it’s easy to determine exactly how much architecture blunders are costing the company. It’s not unusual to find that a cloud-deployed system costs 10 times more money per month than it should. Those numbers are jarring for most businesses. Remember, all that money could have been spent in other places, such as on innovations. ... However, there are more strategic blunders, such as only using a single cloud provider (see example above). Maybe it seemed like a good idea at the time. Perhaps a vendor had a relationship with several board members, or there were political reasons for the limited choices. Unfortunately, the company still ends up with a great deal of technical debt which could have been avoided.


The quantum threat: Implications for the Internet of Things

Quantum computing, though it might be a decade or two away, presents a threat to IoT devices that have been secured against the current threat and which may remain in place for many years. To address this threat, governments are already spending billions, while organisations like NIST and ETSI are several years into programmes to identify and select post-quantum algorithms (PQAs) and industry and academia are innovating. And we are approaching some agreement on a suite of algorithms that are probably quantum safe; both the UK’s NCSC and the US’ NSA endorse the approach of enhanced Public Key cryptography using PQA along with much larger keys. The NCSC recommends that the majority of users follow normal cyber security best practice and wait for the development of NIST standards-compliant quantum-safe cryptography (QSC) products. That potentially leaves the IoT with a problem. Most of these enhanced QSC standards appear to require considerable computing power to deal with complex algorithms and long keys – and many IoT sensors may not be capable of running them.
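The size pressure on constrained devices is easy to see in rough numbers. The byte counts below are approximate figures from the public algorithm specifications and are meant only to show the order-of-magnitude jump.

```python
# Approximate sizes; post-quantum artifacts are one to two orders of
# magnitude larger than their classical counterparts.
key_sizes = {
    "Ed25519 public key (classical)": 32,
    "RSA-2048 public key (classical)": 256,
    "Kyber-768 / ML-KEM public key (post-quantum)": 1184,
    "Dilithium2 / ML-DSA signature (post-quantum)": 2420,
}
for name, size in key_sizes.items():
    print(f"{name:46s} {size:5d} bytes")
```

For a sensor with kilobytes of RAM and a low-power radio, moving and storing objects this large, plus running the heavier math behind them, is exactly the problem the article identifies.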


What is industry cloud?

Industry cloud platforms allow businesses operating in the same sector to share or sell data, technologies, and processes to each other. The potential benefits can be significant, as an industry cloud enables interrelated members of a supply chain to access insights derived from potentially expanded data sets. An industry cloud can offer companies an exciting opportunity to exploit existing data they are not leveraging in a constructive way. ... Joining an industry cloud can offer significant benefits for companies, but many may reflexively balk at the idea of sharing or selling data. Consequently, it’s important that a company has a supportive constituency when considering an industry cloud. Each type of vendor has its own challenges in developing an industry cloud platform. For industry clouds driven by supply chain leaders, the most important requirement will be reexamining tools and methodologies to meet the needs of less sophisticated supply chain participants. Avoiding the temptation to abandon the industry cloud and retreat to a standard cloud for internal use is also a challenge.


Why Instagram Threads is a hotbed of risks for businesses

Threads is very easy to both download and sign up for, as it integrates seamlessly with a user's Instagram account when first signing up for the platform. However, this seamless integration could pose security risks, according to a blog from AgileBlue. Instagram, Facebook, and now Threads are all owned by Meta and for many users, each of their Meta accounts share the same login credentials between each of the platforms. "This makes it much easier for malicious actors to access information as gaining access to just one account ultimately gives them access to all Meta accounts," the blog said. In fact, as of writing, only users with an Instagram account can create a Threads account, so if an individual wants to sign up for Threads, they will first have to create an Instagram account. "If an employee's Threads account is compromised, malicious actors can impersonate the employee to gather information or spread misinformation within their close circle," Guenther says.


With BYOD comes responsibility — and many firms aren't delivering

Management must learn and share the benefits of these systems, make it crystal clear how data will be handled, and put protection in place to ensure personal data remains personal. Communication is critical here. It's also critical in securing the inevitable weak point of any form of security protection — the users themselves. With that in mind, companies should invest in training staff in security awareness and encourage them to update devices as and when those updates appear. Companies should also set standards — and devices that don’t meet those standards, in terms of security protection, should not gain access to corporate systems. This is all common sense stuff, really. We know the security environment is extremely challenging — even police forces are regularly hacked. In that context, it makes total sense to think about how to manage the devices connected to your systems and to put in place the software, security, and user education it takes to protect your business environments. The cost of device management is relatively negligible compared to the consequences of a successful ransomware attack, after all.


Why Enterprise Architecture Must Drive Sustainable Transformation

To some, it may seem odd to present these as parallel, equivalent pressures on businesses. Surely, the continued viability of civilization as we know it should far outweigh any governmental or regulatory proposal in our thinking about the future? The importance of the changing regulatory environment, however, lies not just in its ability to trigger business action: it is a real opportunity for businesses to transform themselves to a more meaningful, consequential sustainability approach. A report co-authored by the WEF and Boston Consulting Group, ‘Net-Zero Challenge: The supply chain opportunity’, found that the supply chains of just eight sectors, including food, construction, and fashion, account for more than 50% of global emissions. It also found that 40% of the emissions could be abated with already-available measures like circular manufacturing and renewable energy. Even achieving net zero emissions in those supply chains, according to the report’s investigations, would only raise costs for end-consumers by 1%-4% on average.


Lean for the modern company

A strong esprit de corps among team members has also long been critical to support healthy growth and the creation of synergistic value, and the book emphasizes the importance of building a healthy culture able to support lean processes and outcomes. This section includes clever material on nurturing a culture of experimentation and discovery, and validating trust by constantly raising the bar on deliverables and expectations. May and Dominguez ground their principles in the core lean ideal of starting with value and working backward—focusing obsessively on improving operations to seamlessly deliver for the customer what they call the “Job to be Done.” The authors’ material on accelerating value creation recapitulates this goal and reminds readers to be vigilant about combating the inevitable waste generated by successful companies. As a writer about lean for nearly two decades, I’ve often been frustrated by misrepresentations of this dynamic system by management gurus who tout only thin-sliced elements of it.


4 Key Observability Best Practices

For cost reasons, becoming comfortable with tracking the current telemetry footprint and reviewing options for tuning — like dropping data, aggregating or filtering — can help your organization better monitor costs and platform adoption proactively. The ability to track telemetry volume by type (metrics, logs, traces or events) and by team can help define and delegate cost-efficiency initiatives. Once you’ve gotten a handle on how much telemetry you’re emitting and what it’s costing you, consider tracking the daily and monthly active users. This can help you pinpoint which engineers need training on the platform. ... Teams need better investigations. One way to ensure a smoother remediation process is through an organized process like following breadcrumbs rather than having 10 different bookmark links and a mental map of what data lives where. One way to do this is by understanding what telemetry your system emits from metrics, logs and traces and pinpointing the potential duplication or better sources of data.
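A hypothetical sketch of the volume-by-type-and-team tracking suggested above; the record shape and field names are invented for the example.

```python
from collections import defaultdict

telemetry_records = [
    {"team": "payments", "type": "metrics", "bytes": 120_000},
    {"team": "payments", "type": "logs",    "bytes": 950_000},
    {"team": "search",   "type": "traces",  "bytes": 400_000},
]

# Aggregate the footprint per (team, signal type) to spot tuning candidates
# such as dropping, aggregating, or filtering a noisy source.
volume = defaultdict(int)
for rec in telemetry_records:
    volume[(rec["team"], rec["type"])] += rec["bytes"]

for (team, signal), total in sorted(volume.items()):
    print(f"{team:10s} {signal:8s} {total / 1_000:.0f} KB")
```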


Software Engineering in the Age of Climate Change: A Testing Perspective

Regression testing confirms that new code does not break existing functionality. Preventing regressions reduces the need for repeated testing and bug fixes, optimizing the software development lifecycle and minimizing unnecessary computational resources. ... Online education platforms introduce new features to enhance user experiences. Regression testing ensures these changes do not disrupt existing lessons or content delivery. By maintaining stability, energy is saved by minimizing the need for post-deployment fixes. Suppose a telecommunications company is rolling out a software update for its network infrastructure to improve data transmission efficiency and reduce latency. The update includes changes to the routing algorithms used to direct data traffic across the network. While the primary goal is to enhance network performance, there is a potential risk of introducing regressions that could disrupt existing services. Before deploying the software update to the entire network, the telecommunications company conducts thorough regression testing.
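A minimal pytest-style sketch of the idea: the assertions pin down existing routing behavior so an efficiency-focused update that breaks it fails fast. The route_packet function and its expected outputs are hypothetical stand-ins for the real routing algorithm.

```python
def route_packet(destination: str) -> str:
    # Stand-in for the routing algorithm under test.
    return "edge-1" if destination.startswith("10.") else "core-1"

def test_existing_routes_unchanged():
    # Regression guard: these routes worked before the update and must
    # keep working after it, whatever else the update improves.
    assert route_packet("10.0.0.5") == "edge-1"
    assert route_packet("172.16.0.9") == "core-1"
```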


How to make your developer organization more efficient

Automating manual tasks and repetitive processes is crucial for increasing developer efficiency. “Employing automation for tasks that many engineers face throughout their SDLC helps to shift focus towards human value-add activities. This also increases overall delivery throughput, with higher confidence in our development lifecycle, and produces consistent processes across teams that would otherwise be handled one-off and uniquely” said Joe Mills. Developers can engage a team of automation experts to assess certain processes and tasks and help uncover automation opportunities. The team uses a hub-and-spoke model to scale their efforts across development teams at Discover and can help teams with robotic process automation, business automation, or code automation. ... In addition to these initiatives, engineers at Discover adhere to a set of practices, internally called CraftWorx, that define and direct the agile development process. Aligning engineers across these practices reduces friction because engineers and developers are following the same development practices.



Quote for the day:

"A leader takes people where they would never go on their own." -- Hans Finzel

Daily Tech Digest - August 31, 2023

Most hyped network technologies and how to deal with them

Hype four is zero trust. Security is justifiably hot, and at the same time there never seems to be an end to the new notions that come along. Zero trust, according to technologists, is problematic not only because senior management tends to jump on it without thinking, but because there isn’t even a consistent view of the technology being presented. “Trust-washing,” said one professional, “has taken over my security meetings,” meaning that they’re spending too much time addressing all the claims vendors are making. Technologists say the best approach to a project to address this hype issue starts by redefining “zero” trust as “explicit trust” and making it clear that this means that it will be necessary to add tools and processes to validate users, resources, and their relationships. This will mean impacting the line organizations whose users and applications are being protected, in that they will have to define and take the necessary steps to establish trust. Zero-trust enhancements are best implemented through a vendor already established in the security or network connectivity space, so start by reviewing each of the tools available from these incumbent vendors.


Don’t Build Microservices, Pursue Loose Coupling

While it is true that microservices strategies do support loose coupling, they’re not the only way. Simpler architectural strategies can afford smaller or newer projects the benefits of loose coupling in a more sustainable way, generating less overhead than building up microservices-focused infrastructure. Architectural choices are as much about the human component of building and operating software systems as they are about technical concerns like scalability and performance. And the human component is where microservices can fall short. When designing a system, one should distinguish between intentional complexity (where a complex problem rightfully demands a complex solution) and unintentional complexity (where an overly complex solution creates unnecessary challenges). It’s true that firms like Netflix have greatly benefited from microservices-based architectures with intentional complexity. But an up-and-coming startup is not Netflix, and trying to follow in the streaming titan’s footsteps can introduce a great degree of unintentional complexity.
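To illustrate that loose coupling is an interface discipline rather than a deployment style, here is a hedged sketch of a modular monolith in which modules interact only through narrow interfaces; every name is invented for the example.

```python
from typing import Protocol

class PaymentGateway(Protocol):
    def charge(self, user_id: int, cents: int) -> bool: ...

class InProcessPayments:
    """Runs in the same process today; could become a service later."""
    def charge(self, user_id: int, cents: int) -> bool:
        print(f"charging user {user_id}: {cents} cents")
        return True

class OrderService:
    # Depends on the interface, not the implementation, so swapping in a
    # remote payments service later requires no change here.
    def __init__(self, payments: PaymentGateway):
        self.payments = payments

    def place_order(self, user_id: int, cents: int) -> bool:
        return self.payments.charge(user_id, cents)

orders = OrderService(InProcessPayments())
orders.place_order(user_id=7, cents=1999)
```

The coupling stays loose without any of the operational overhead (service discovery, network retries, distributed tracing) that a microservices split would bring.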


MPs say UK at real risk of falling behind on AI regulation

Noting current trialogue negotiations taking place in the EU over its forthcoming AI Act and the Biden administration's voluntary agreements with major tech firms over AI safety, the SITC chair Greg Clark told reporters at a briefing that time is running out for the government to establish its own AI-related powers and oversight mechanisms. “If there isn’t even quite a targeted and minimal enabling legislation in this session, in other words in the next few months, then the reality [for the introduction of UK AI legislation] is probably going to be 2025,” he said, adding it would be “galling” if the chance to enact new legislation was not taken simply because “we are timed out”. “If the government’s ambitions are to be realised and its approach is to go beyond talks, it may well need to move with greater urgency in enacting the legislative powers it says will be needed.” He further added that any legislation would need to be attuned to the 12 AI governance challenges laid out in the committee’s report, which relate to various competition, accountability and social issues associated with AI’s operation.


The Agile Architect: Mastering Architectural Observability To Slay Technical Debt

Architectural observability requires two other key phases: analysis and observation. The former provides a deeper understanding of the software architecture, while the latter maintains an updated picture of the system. These intertwined phases, reflecting Agile methodologies' adaptive nature, foster effective system management. ... The cyclic 'analyzing-observing' process starts with a deep dive into the nitty-gritty of the software architecture. By analyzing the information gathered about the application, we can identify elements like domains within the app, unnecessary code, or problematic classes. This methodical exploration helps architects simplify their applications and better understand their static and dynamic behavior. The 'observation' phase, like a persistent scout, keeps an eye on architectural drift and changes, helping architects identify problems early and stay up-to-date with the current architectural state. In turn, this information feeds back into further analysis, refining the understanding of the system and its dynamics.


Operation 'Duck Hunt' Dismantles Qakbot

The FBI dubbed the operation behind the takedown "Duck Hunt," a play on the Qakbot moniker. The operation is "the most significant technological and financial operation ever led by the Department of Justice against a botnet," said United States Attorney Martin Estrada of the Central District of California. International partners in the investigation include France, Germany, the Netherlands, the United Kingdom, Romania and Latvia. "Almost every country in the world was affected by Qakbot, either through direct infected victims or victims attacked through the botnet," said senior FBI and DOJ officials. Officials said Qakbot spread primarily through email phishing campaigns, and FBI probes revealed Qakbot infrastructure and victim computers had spread around the world. Qakbot played a role in approximately 40 different ransomware attacks over the past 18 months that caused $58 million in losses, Estrada said. "You can imagine that the losses have been many millions more through the life of the Qakbot," which cyber defenders first detected in 2008, Estrada added. "Today, all that ends," he said.


How CISOs can shift from application security to product security

The fact that product security has worked its way onto enterprise organizational charts is not a repudiation of traditional application security testing, just an acknowledgement that modern software delivery needs a different set of eyes beyond the ones trained on the microscope of appsec testing. As technology leaders have recognized that applications don’t operate in a vacuum, product security has become the go-to team to help watch the gaps between individual apps. Members of this team also serve as security advocates who can help instill security fundamentals into the repeatable development processes and ‘software factory’ that produces all the code. The emergence of product security is analogous to the addition of site reliability engineering early in the DevOps movement, says Scott Gerlach, co-founder and CSO at API security testing firm StackHawk. “As software was delivered more rapidly, reliability needed to be engineered into the product from inception through delivery. Today, security teams typically have minimal interactions with software during development. 


CIOs are worried about the informal rise of generative AI in the enterprise

What can CISOs and corporate security experts do to put some sort of limits on this AI outbreak? One executive said that it’s essential to toughen up basic security measures like “a combination of access control, CASB/proxy/application firewalls/SASE, data protection, and data loss protection.” Another CIO pointed to reading and implementing some of the concrete steps offered by the National Institute of Standards and Technology Artificial Intelligence Risk Management Framework report. Senior leaders must recognize that risk is inherent in generative AI usage in the enterprise, and proper risk mitigation procedures are likely to evolve. Still, another respondent mentioned that in their company, generative AI usage policies have been incorporated into employee training modules, and that policy is straightforward to access and read. The person added, “In every vendor/client relationship we secure with GenAI providers, we ensure that the terms of the service have explicit language about the data and content we use as input not being folded into the training foundation of the 3rd party service.”


Google’s Duet AI now available for Workspace enterprise customers

The launch of Duet AI means Google has beaten Microsoft to market with genAI tools for its office software suite. Microsoft is currently trialing its own Copilot AI assistant for Microsoft 365 applications such as Word, Excel and Teams. The Microsoft 365 Copilot, based on OpenAI’s ChatGPT, will also cost $30 per user each month when it’s made available later this year or in early 2024. “Google's choice to price Duet at $30 is surprising, given that it's the same price as Microsoft Copilot,” said J. P. Gownder, vice president and principal analyst on Forrester's Future of Work team. “Both offerings promise to improve employee productivity, but Google Workspace is positioned as a lower-cost alternative to Microsoft 365 in the first place. Its products contain perhaps 70% to 80% of the features of their counterparts in the Microsoft 365 office programs suite.” However, as with Microsoft’s genAI feature, Gownder expects Duet will provide customers with improvements around productivity and employee experience, even if it’s too early to make firm judgements on either product.


Empowering Female Cybersecurity Talent in the Supply Chain

While young women and other minority individuals today are taught they can have a successful career in any industry, having the right support from educators, peers, and co-workers is a key factor in the eventual decision to enter – and stay in – technical fields. Around 74% of middle school females are interested in STEM subjects. Yet, when they reach high school, interest drops, further proving the need for unwavering awareness efforts and support at an early age. According to a recent report from the NSF's NCSES, more women worked in STEM jobs over the past decade compared to previous years, proving progress in the right direction. Despite this increase, a lack of external support and awareness leaves adolescents exploring different paths. Since many decide their majors as early as age 18, promoting technical roles in college can even be considered too late. Therefore, it’s imperative that leaders encourage young talent by communicating and rewarding the skillsets needed to hold these roles and showcase the career paths available.


Machine Learning Use Cases for Data Management

In the financial services sector, ML algorithms in fraud detection and risk assessment are expected to enhance security measures and mitigate potential risks. By leveraging advanced Data Management techniques, ML algorithms can analyze vast amounts of financial data to identify patterns and anomalies that may indicate fraudulent activities. These algorithms can adapt and learn from new emerging fraud patterns, enabling financial institutions to take immediate action. Additionally, ML algorithms can aid in risk assessment by analyzing historical data, market trends, and customer behavior to predict potential risks accurately. ... In the manufacturing sector, ML is revolutionizing quality control and predictive maintenance processes. ML algorithms can analyze vast amounts of data collected from sensors, machines, and production lines to identify patterns and anomalies. This enables manufacturers to detect defects in real-time, ensuring product quality while minimizing waste and rework. Moreover, ML algorithms can predict equipment failures by analyzing historical data on machine performance.
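As a toy illustration of the fraud-detection pattern described above, the snippet below flags anomalous transactions with scikit-learn's IsolationForest; the transaction data is invented for the example.

```python
from sklearn.ensemble import IsolationForest

# Feature vectors: [amount_usd, hour_of_day]
transactions = [[25, 12], [40, 9], [18, 14], [33, 11], [9500, 3]]

model = IsolationForest(contamination=0.2, random_state=0).fit(transactions)
for tx, label in zip(transactions, model.predict(transactions)):
    print(tx, "flag for review" if label == -1 else "ok")  # -1 marks outliers
```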



Quote for the day:

"The leader has to be practical and a realist, yet must talk the language of the visionary and the idealist." -- Eric Hoffer