Daily Tech Digest - June 24, 2023

Technologists Want to Develop Software Sustainably

Ingrid Olson, principal with application security at Coalfire, explains that organizations such as the Green Software Foundation can help educate and unite developers interested in learning more about green development practices. "In the right economy developers can also 'vote with their feet' by seeking out employment with companies that advocate for more environmentally sustainable development practices," she says. She adds that everyone is a stakeholder, directly or indirectly, in the effort to create more sustainable software development practices. "In addition to the environmental benefits of green software development, in the long term there are also a lot of potential financial benefits from development practices that result in reduced carbon emissions," Olson says. ... Tegan Keele, KPMG US climate and data technology leader, explains that software that includes complex computational models, like AI and ML, typically requires more computing "horsepower" to develop, test and run. "The more intensive that process, the greater proportion of a data center’s computing power that process takes up," says Keele.


Shift in Sprint Review Mindset: from Reporting to Inclusive Ideation

It's useful to remember that our brains are wired to expect things based on what we've experienced before: it's extremely helpful when the situation is similar, but it can also prevent us from being open to new but important nuances. The new corporate language, unfamiliar experiences, and the prevailing zeitgeist all have to be learned. There may be times when your attention can make a big difference. For example, you may share a seemingly mundane suggestion in a meeting, only to notice a distinct shift in the atmosphere of the room. It's like walking into a bad neighborhood in a new country and instinctively feeling like an outsider. The level of danger is different, of course, but in both cases it's important to investigate and learn from these new experiences. Therefore, like a seasoned traveler, change agents should be extremely open-minded and strive to understand the culture they're entering, while not blending in and staying true to their values and beliefs. It's important to understand the value and function of the current Sprint Review processes, while resisting "it won't work in our environment" and other skepticism.


The Rise of Developer Native Dynamic Observability

Dynamic observability addresses these challenges: as opposed to static logging, it gives developers end-to-end observability across app deployments and environments directly from their IDEs. This translates into reduced MTTR, enhanced developer productivity and overall cost optimization, since developers debug and consume logs and telemetry data where and when they need it rather than monitoring everything. Dynamic observability has emerged as a pivotal approach in modern software development, enabling teams to gain deep insights into system behavior and make informed decisions. It goes beyond traditional testing and monitoring methodologies, offering a comprehensive understanding of system patterns, strengths and weaknesses. ... Dynamic observability represents a paradigm shift in software development, enabling developers to gain a detailed understanding of system behavior and make informed decisions. Using tools and practices that go beyond traditional testing, it empowers teams to create robust and reliable systems.


Monolithic or microservices: which architecture best suits your business?

In the monolithic world, you’re dealing with one single codebase. The simplicity of this model makes it a great choice for small-to-medium-sized applications. But, as the business grows, so do the challenges. Every change, no matter how small, requires a full redeployment. Scaling particular functions can turn into a headache, slowing down your go-to-market speed and impacting your responsiveness. On the other hand, the microservices approach works like a small, self-contained team that collaborates, but can also work independently. This architecture gives you the flexibility of scaling, updating, and deploying each service independently — great for scalability, but with added complexity. Imagine trying to coordinate different teams spread out around the world, each with its own time zone and function. Managing microservices is a bit like that. Choosing the right architectural style isn’t just about handling the technology stack, it’s about aligning your tech with your business strategy.


Microsoft slammed for hitting European cloud users with ‘unfair, additional’ charges

The issue can be traced back to a Microsoft licensing-related policy change in 2019 that stopped customers from deploying on-premise Office 365 licenses on third-party infrastructure. According to the report, this move may have generated an estimated €560m in first-year license repurchase costs for European enterprises. “An additional surcharge of €1bn, relating to licensing surcharges imposed on non-Azure deployments of SQL Server, may further be attributed to the policy change,” said the report. “If this Microsoft tax equals €1bn per year for just one product among potentially hundreds, the overall cost to the European economy as it looks to move enterprise and productivity computing to the cloud must be estimated to be significantly higher.” It goes on to make the point that this additional spend is money that could be used to accelerate the pace of digital transformation for European enterprises and, in the case of the public sector, this is taxpayers’ money that is being “unfairly diverted to already-dominant players”.


Making Better Data-Informed Decisions to Navigate Disruptions

Traditionally, companies have managed risks across domains that, while often volatile, were nevertheless limited in scope. Market dynamics, disruptive technology, and regulatory risks can change dramatically quarter to quarter, for example, but business leaders often rely on several key assumptions about broader global trends. However, the events of recent years have made manifest that business and political leaders can no longer rely on these assumptions. A lingering pandemic and its impacts have drawn into question traditional supply chain and risk management approaches. Social and political concerns have introduced new regulatory risks to businesses across industries. Global economic uncertainty lingers. Climatic risks require businesses to reconsider both their current supply chain strategies and long-term geographic footprints. Finally, geopolitical risks — including war and sanctions — and the uncertainty of some international agreements have upended traditional assumptions about the security of long-term investments.


Six skills you need to become an AI prompt engineer

Prompt engineering is fundamentally the creation of interactions with generative AI tools. Those interactions may be conversational, as you've undoubtedly seen (and used) with ChatGPT. But they can also be programmatic, with prompts embedded in code, the rough equivalent of modern-day API calls; except you're not simply calling a routine in a library, you're using a routine in a library to talk to a vast large language model. Before we talk about specific skills that will prove useful in landing that prompt engineering gig, let's talk about one characteristic you'll need to make it all work: a willingness to learn. While AI has been with us for decades, the surge in demand for generative AI skills is new. The field is moving very quickly, with new breakthroughs, products, techniques, and approaches appearing constantly. To keep up, you must be more than willing to learn -- you must be voracious in learning, looking for, studying, and absorbing everything you can possibly find. If you keep up with your learning, then you'll be prepared to grow in this career.
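To make the programmatic case concrete, here is a minimal sketch of a prompt embedded in code, written against the OpenAI Python SDK as it existed around the time of writing; the model name, prompt text, and key handling are illustrative assumptions rather than anything from the article.

```python
# A prompt embedded in code: the "programmatic" interaction described above.
# Model name and prompt are illustrative; load the key from a secret store in practice.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a concise release-notes writer."},
        {"role": "user", "content": "Summarize: fixed auth token expiry bug; added CSV export."},
    ],
    temperature=0.2,  # low randomness, since the output feeds another program
)
print(response.choices[0].message["content"])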


Author Talks: Create your ‘reinvention road map’ in four easy steps

The first step, the search, is fascinating. This is when you are collecting information, collecting experiences. What’s key about it is that for most people it’s unintentional. This is the stuff that is going to take you to your transition, to your reinvention, but you don’t know it at the time. For career people, maybe it’s a side hustle or just a random interest, a hobby. That’s the search. The second step is the struggle. The struggle is where you have disconnected, or you’re starting to disconnect, from that previous identity, but you have not figured out where you are going. It’s really uncomfortable, and we don’t like to talk about it. When we tell these reinvention stories, we tend to skip over this part. But it’s incredibly important, as the struggle is where all the important work gets done. The struggle often doesn’t end until you hit the third step, the stop. The stop might be something that you initiate: for example, I quit my job. But it may be something imposed on you—for example, you lose your job. Or it could be a trauma, like a divorce or an illness in the family or a pandemic.


6 strategic imperatives for your next data strategy

In many industries, depending on how your customers consume and extract value from your products and services, your data can be monetized across multiple layers in the tech stack: the raw data itself; data with various forms of post-processing applied for added insights; data consumed via visualization and analytics tools; and data consumed via industry applications such as digital twins. In the architecture, engineering and construction (AEC) industry, for example, these scenarios might include geospatial data like aerial imagery offered directly via an ecommerce-enabled website; drone-based photogrammetry of roadways and bridges with AI-enabled defect analysis, like Manam; traffic congestion data visualized via a GIS platform, like Urban SDK; or EV charging data provided by a live digital twin. ... Look for opportunities to combine your own data with third-party data, including open data where applicable, for added value, and look for tools that support data ingestion, transformation, and integration to feed into a variety of analysis tools, including GIS and digital twins.


China-sponsored APT group targets government ministries in the Americas

The campaign ran from late 2022 into early 2023. It also targeted a government finance department in a country in the Americas and a corporation that sells products in Central and South America. There was also one victim based in a European country, according to the report. ... Graphican can create an interactive command line that can be controlled from the server, download files to the host, and set up covert processes to harvest data of interest. This technique was used earlier by the Russian state-sponsored APT group Swallowtail in a campaign in 2022 to deliver the Graphite malware. “Once a technique is used by one threat actor, we often see other groups follow suit, so it will be interesting to see if this technique is something we see being adopted more widely by other APT groups and cybercriminals,” Symantec said in its report. Flea has been in operation since at least 2004. It initially used email as its infection vector, but there have also been reports of it exploiting public-facing applications, as well as using VPNs, to gain initial access to victim networks.



Quote for the day:

"A sense of humor is part of the art of leadership, of getting along with people, of getting things done." -- Dwight D. Eisenhower

Daily Tech Digest - June 22, 2023

Mass adoption of generative AI tools is derailing one very important factor, says MIT

Many companies "were caught off guard by the spread of shadow AI use across the enterprise," Renieris and her co-authors observe. What's more, the rapid pace of AI advancements "is making it harder to use AI responsibly and is putting pressure on responsible AI programs to keep up." They warn that the risks that come from ever-rising shadow AI are increasing, too. For example, companies' growing dependence on a burgeoning supply of third-party AI tools, along with the rapid adoption of generative AI -- algorithms (such as ChatGPT, Dall-E 2, and Midjourney) that use training data to generate realistic or seemingly factual text, images, or audio -- exposes them to new commercial, legal, and reputational risks that are difficult to track. The researchers refer to the importance of responsible AI, which they define as "a framework with principles, policies, tools, and processes to ensure that AI systems are developed and operated in the service of good for individuals and society while still achieving transformative business impact."


From details to big picture: how to improve security effectiveness

Benjamin Franklin once wrote: “For the want of a nail, the shoe was lost; for the want of a shoe the horse was lost; and for the want of a horse the rider was lost, being overtaken and slain by the enemy, all for the want of care about a horseshoe nail.” It’s a saying with a history that goes back centuries, and it points out how small details can lead to big consequences. In IT security, we face a similar problem. There are so many interlocking parts in today’s IT infrastructure that it’s hard to keep track of all the assets, applications and systems that are in place. At the same time, the tide of new software vulnerabilities released each month can threaten to overwhelm even the best organised security team. However, there is an approach that can solve this problem. Rather than looking at every single issue or new vulnerability that comes in, how can we look for the ones that really matter? ... When you look at the total number of new vulnerabilities that we faced in 2022 – 25,228 according to the CVE list – you might feel nervous, but only 93 vulnerabilities were actually exploited by malware. 
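One hedged way to act on that insight is to filter scanner findings against CISA's Known Exploited Vulnerabilities (KEV) catalog, a public list of CVEs known to be exploited in the wild. Below is a minimal Python sketch; the feed URL is an assumption about the current endpoint, and the scanner result list is made up.

```python
# Triage "the ones that really matter": keep only CVEs that appear in
# CISA's Known Exploited Vulnerabilities (KEV) catalog.
import requests

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def known_exploited(cve_ids):
    catalog = requests.get(KEV_URL, timeout=30).json()
    exploited = {entry["cveID"] for entry in catalog["vulnerabilities"]}
    return sorted(set(cve_ids) & exploited)

# Hypothetical scanner output: Log4Shell should survive the filter.
print(known_exploited(["CVE-2021-44228", "CVE-2023-99999"]))
```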


3 downsides of generative AI for cloud operations

While we’re busy putting finops systems in place to monitor and govern cloud costs, we could see a spike in the money spent supporting generative AI systems. What should you do about it? This is a business issue more than a technical one. Companies need to understand how and why cloud spending is occurring and what business benefits are being returned. Then the costs can be included in predefined budgets. This is a hot button for enterprises that have limits on cloud spending. The line-of-business developers would like to leverage generative AI systems, usually for valid business reasons. However, as explained earlier, they cost a ton, and companies need to find either the money, the business justification, or both. In many instances, generative AI is what the cool kids use these days, but it’s often not cost-justifiable. Generative AI is sometimes being used for simple tactical tasks that would be fine with more traditional development approaches. This overapplication of AI has been an ongoing problem since AI was first around; the reality is that this technology is only justifiable for some business problems.


Pros and cons of managed SASE

If a company decides to deploy SASE by going directly through SASE vendors, they’ll have to configure and implement the service themselves, says Gartner’s Forest. “The benefits of a managed service provider are a single source for all setup and management, the ability to redeploy internal resources for other tasks, and the ability to access skills and capabilities that don’t exist internally,” he says. Getting in-house IT staff with the right expertise to handle SASE can be a real challenge, particularly in today’s hiring climate: 76% of IT employers say they’re having difficulty finding the hard and soft skills they need, and one in five organizations globally is having trouble finding skilled tech talent, according to a 2023 survey by ManpowerGroup. The access to outside experts is particularly appealing to companies that don’t have the resources to manage SASE themselves. Managed SASE providers have specialized expertise in deploying and managing SASE infrastructure, says Ilyoskhuja Ikromkhujaev, software engineer at software developer Nipendo. “Which can help ensure that your system is set up correctly and stays up to date with the latest security features and protocols,” he says.


The security interviews: Exploiting AI for good and for bad

AI has moved beyond automation. Looking at large language models, which some industry experts see as representing the tipping point that ultimately leads to wide-scale AI adoption, Heinemeyer believes that an AI capable of writing code offers attackers the opportunity to develop much more bespoke and tailored, sophisticated attacks. Imagine, he says, highly personalised phishing messages that have error-free grammar and no spelling mistakes. For its customers, he says Darktrace uses machine learning to learn what normal looks like in business email data: “We learn exactly how you communicate, what syntax you use in your emails, what attachments you receive, who you talk to, and when this is internal or external. We can detect if somebody sends an email that is unusual for you.” A large language model like ChatGPT reads everything that is on the public internet. The implication is that it will be reading people’s social media profiles, seeing who they interact with, their friends, what they like and do not like. Such AI systems have the ability to truly understand someone, based on the publicly available information that can be gleaned across the web.


Switching the Blame for a More Enlightened Cybersecurity Paradigm

The “blame the user” mentality is a cognitive bias that ignores the complexities of human-computer interaction. Research in cognitive psychology and human factors engineering has shown that humans are not designed to be perfect digital operators. Mistakes are a natural part of our interaction with systems, especially those that are complex and non-intuitive. Moreover, our susceptibility to scams and manipulation is not just a personal failing, but a product of millennia of evolution. For instance, social engineering attacks exploit our natural tendencies to trust and cooperate, which have been crucial to human survival and societal development. To put the onus on the individual is to ignore the broader context. Shifting the blame is an easy way out. It absolves organizations of the responsibility to address systemic issues and allows them to maintain the status quo. This is underpinned by the “just-world hypothesis,” a cognitive bias which propounds that people get what they deserve. When an employee falls for a scam, it's easy to assume that they were careless or ill-prepared.


Standardized information sharing framework 'essential' for improving cyber security

Security experts have called for improvements in how private sector organizations share threat intelligence data with the wider industry. It’s believed that better cross-organizational collaboration would improve cyber resiliency in the face of cyber attacks that continue to rise in frequency and grow ever more sophisticated. “I think this is one of the ways in which the private sector can work with governments around the world, and each other across sectors, industries, and regions,” said Jen Ellis, co-chair at the Institute for Science and Technology’s Ransomware Task Force. Government agencies such as the UK’s Information Commissioner’s Office (ICO) and the US’ Cybersecurity and Infrastructure Security Agency (CISA) enforce strict reporting deadlines around data breaches, which is seen as a positive step. However, victims often report only the minimum required information, which in turn reduces other organizations’ ability to learn from, and potentially prevent, follow-on attacks.


Hybrid Microsoft network/cloud legacy settings may impact your future security posture

Often in large organizations, there are users in your network who have the equivalent of Domain administrative rights and are not even aware of this. Your firm may even have inherited a domain setup, with its original accounts and permissions, from a Novell network migrated years before. Often the difference between a firm with better security and one with poor security is having a staff that takes the additional time to test and confirm that there will be no side effects in the network if changes are made. Take the example of unconstrained delegation; this is a setting that many web applications need to function, including those that are internal only to the organization. But this setting can expose the domain to excessive risk. Delegation allows a computer or server to save users’ Kerberos authentication tickets; these saved tickets can then be used to act on the user’s behalf. Attackers love to grab these tickets, as they can then interact with the server and impersonate the identity, and in particular the privileges, of those users.
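For readers who want to hunt for this setting, below is a minimal audit sketch using the Python ldap3 library; the domain controller, credentials, and base DN are placeholders. It filters on the TRUSTED_FOR_DELEGATION bit (0x80000, decimal 524288) of userAccountControl via the LDAP bitwise-AND matching rule.

```python
# Enumerate computer accounts flagged for unconstrained delegation.
# Server, credentials, and base DN are hypothetical placeholders.
from ldap3 import Server, Connection, SUBTREE

server = Server("ldap://dc01.example.com")  # hypothetical domain controller
conn = Connection(server, user="EXAMPLE\\auditor", password="REDACTED", auto_bind=True)

conn.search(
    search_base="DC=example,DC=com",
    search_filter="(&(objectCategory=computer)"
                  "(userAccountControl:1.2.840.113556.1.4.803:=524288))",
    search_scope=SUBTREE,
    attributes=["sAMAccountName"],
)
for entry in conn.entries:
    print(entry.sAMAccountName)  # hosts whose cached tickets an attacker could abuse
```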


Why we don't have 128-bit CPUs

You might think 128-bit isn't viable because it's difficult or even impossible to do, but that's actually not the case. Lots of parts in processors, CPUs and otherwise, are 128-bit or larger, like memory buses on GPUs and SIMDs on CPUs that enable AVX instructions. We're specifically talking about being able to handle 128-bit integers, and even though 128-bit CPU prototypes have been created in research labs, no company has actually launched a 128-bit CPU. The answer might be anticlimactic: a 128-bit CPU just isn't very useful. A 64-bit CPU can handle over 18 quintillion unique numbers, from 0 to 18,446,744,073,709,551,615. By contrast, a 128-bit CPU would be able to handle over 340 undecillion numbers, and I guarantee you that you have never even seen "undecillion" in your entire life. Finding a use for calculating numbers with that many zeroes is pretty challenging ... Ultimately, the key reason why we don't have 128-bit CPUs is that there's no demand for a 128-bit hardware-software ecosystem. The industry could certainly make it if it wanted to, but it simply doesn't.
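The "no demand" point is easy to demonstrate: when software occasionally needs integers wider than 64 bits, it simply emulates them. Python integers, for instance, are arbitrary-precision, so the figures quoted above can be reproduced in software on any ordinary 64-bit machine.

```python
# Software stands in for missing 128-bit hardware: Python ints are
# arbitrary-precision, computed with multi-word arithmetic on a 64-bit CPU.
U64_MAX = 2**64 - 1
U128_MAX = 2**128 - 1

print(U64_MAX)   # 18446744073709551615, the 64-bit ceiling quoted above
print(U128_MAX)  # about 3.4e38, i.e. over 340 undecillion
print(U64_MAX * U64_MAX < U128_MAX)  # True: even a 64x64-bit product fits in 128 bits
```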


Data sovereignty and security driving hybrid IT adoption in Australia

According to Nutanix’s fifth global Enterprise Cloud Index survey, data sovereignty was the top driver of infrastructure decisions in Australia, with 15% of local respondents citing it as the most important criterion when considering infrastructure investments. Data sovereignty was also one of the top three considerations for over a third (37%) of enterprises in Australia. “Control and security are the biggest factors Australian organisations are weighing up when transforming their IT infrastructure,” said Jim Steed, managing director of Nutanix Australia and New Zealand. “While public cloud was seen as a panacea for many years, it’s becoming increasingly clear that cloud is a tool – not a destination. Some workloads and applications are perfectly suited to a public cloud, but Australian organisations are moving their most sensitive and business-critical workloads back home to their on-premises infrastructure.” According to the study, over half of Australian organisations are planning to repatriate some applications from the public cloud to on-premise datacentres in the next 12 months due to data sovereignty concerns.



Quote for the day:

"Effective team leaders adjust their style to provide what the group can't provide for itself." -- Kenneth Blanchard

Daily Tech Digest - June 21, 2023

India’s digital transformation could be a game-changer for economic development

India currently has a data-fiduciary-centric model. Individuals or small businesses must go to the original keeper of data to access their data. This inhibits the use of data for the financial empowerment of individuals. The current method of storing financial data across institutions and companies is also inefficient, resulting in the use of notarized hard copies, PDFs, screen scraping, password sharing, etc., all of which pose a threat to individual privacy. Accessing and sharing information can be difficult because of the varied formats. This forces individuals and institutions to rely on patchwork solutions. ... AAs can be thought of as traffic police between Financial Information Users (FIUs) and Financial Information Providers (FIPs), with users having complete control over the flow of information. The introduction of AA architecture could revolutionize how financial data is shared, similar to the impact UPI has had on money transfers. The AA ecosystem is cross-sectoral, with customers at the center. AAs provide a secure interface that allows users to consent to share private and sensitive data. This democratizes data use and sharing, enabling FIUs to request users' financial information.


8 ways to detect (and reject) terrible IT consulting advice

Recommendations are great, but they don’t automatically turn into solutions. “Most of the consultant’s dialogue should be repeating back to you the problem they’re solving,” advises Bill Carslay, senior vice president and general manager of professional services at IT support services firm Rimini Street. “The resulting solution should be directly related to the problem as it’s defined in your terms, and should follow the steps and phases your organization is willing to take.” When a consultant grabs onto a common IT challenge and quickly describes how they will solve it, it’s likely the solution won’t fully address the very specific problem an organization may be facing. “Keep in mind that one size doesn’t fit all, and be on the lookout for recommendations that fit or augment the parameters you’ve set,” Carslay suggests. ... When advice lacks logical reasoning, contradicts data, or fails to consider long-term consequences, it’s likely terrible. “A critical mind and rigorous evaluation will help you distinguish the good from the bad,” says Edward Kring, vice president of engineering at software development company Invozone.com.


Three Data Removal Myths That Provide a False Sense of Security

There are many ways to attempt to remove a file -- such as data deletion, wiping, factory reset, reformatting, and file shredding -- but without proper context, these solutions independently can be incomplete. For example, deleting a file and emptying the recycle bin can remove pointers to files containing data but not the data itself. The data is easily recoverable until the data is overwritten. A factory reset removes all used data as it restores a device to factory settings, but not all methodologies used in resets lead to complete erasure, and there’s no way to validate that all data is gone. Data wiping is the process of overwriting data without verification. File shredding destroys data on individual files by overwriting the space with a random pattern of 1s and 0s. Because neither method provides verification that the process was completed successfully across all sectors of the device, they are considered incomplete. Finally, reformatting, which is performed on a working disk drive to eradicate its contents, is another method where most of the data can be recovered with forensics tools available online.
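A minimal sketch makes the gap visible: the Python snippet below "shreds" a file by overwriting it and then unlinking it, yet, exactly as the passage argues, it provides no verification that every physical sector was touched.

```python
# A minimal file-shredding sketch: overwrite contents, then unlink.
# Nothing here VERIFIES the overwrite reached every sector, and SSD
# wear-levelling, journaling, or copy-on-write filesystems may retain
# old blocks, which is why such methods are considered incomplete.
import os

def shred(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # random pattern, like file shredding
            f.flush()
            os.fsync(f.fileno())       # push the write out of the page cache
    os.remove(path)                    # finally drop the directory entry
```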


Measuring engineering velocity misses all the value

Story point velocity has become the dominant driver of agile software development lifecycles (SDLCs) with the rise of scrum. How many story points did the team complete this week? How can we get them to deliver more points while still meeting the acceptance criteria? Speed is treated as synonymous with success, and acceleration is hailed as the primary focus of any successful engineering enterprise. Deliver more story points and you’re clearly “doing the thing.” The impulse is not without some logic. From the C-suite perspective, a perfect product that misses its moment on the market isn’t worth much. Sure, it may be full of engineering genius, but if it generates little to no business value, it quickly becomes more “museum relic” than “industry game-changer.” It pays to be first. In fact, one study found that accelerating time to market by just 5% could increase ROI by almost 13%. However, I believe that a simplistic obsession with speed misses several factors critical to optimizing the actual impact of any software solution.


Developers’ Role in Protecting Privacy

Although sharing data has become commonplace in exchange for benefits and value, consumers are becoming more aware of privacy issues. Take the EU’s General Data Protection Regulation (GDPR) as an example. Over the past five years, awareness has more than doubled in notable European markets such as the UK, Spain, Germany, the Netherlands and France. Meanwhile, there is also commercial pressure, as employers rely on developers to innovate to remain profitable. At the same time, customers expect brands to be responsible with their data, and failure to do so at the expense of trying to commercialize a new application could be detrimental. Indeed, while the pandemic may have ushered in significant changes and altered consumers’ attitudes toward data privacy, end users remain unwavering about the importance of security. Maintaining this balancing act is becoming increasingly complex to achieve. However, the question of data privacy is becoming a key business priority, and that means developers have a big opportunity to show their commercial value to their organizations.


Why CISOs should be concerned about space-based attacks

Making matters worse is the tendency for many satellites to be ‘dual use’ carriers, in that they provide services that are used by both commercial and military clients. As such, “US commercial satellites may be seen as legitimate targets in case they are used in the conflict in Ukraine,” reported the Russian state-owned news agency TASS on October 27, 2022. Speaking before the UN General Assembly’s First Committee, Russian Foreign Ministry official Konstantin Vorontsov threatened that, “Quasi-civil infrastructure may be a legitimate target for a retaliation strike.” This has certainly been true for SpaceX’s Starlink satellite broadband service in Ukraine. "Some Starlink terminals near conflict areas were being jammed for several hours at a time,” SpaceX CEO Elon Musk said in a Twitter message posted on March 5, 2022. “Our latest software update bypasses the jamming. Am curious to see what’s next!” Such threats and actions come as no surprise to Laurent Franck, a satellite consultant and ground systems expert with the Euroconsult Group. Whenever a commercial satellite “can be used on a battlefield and used in a war context, it becomes a target,” he says. 


Who Is Responsible for Identity Threat Detection and Response?

For organizations just starting to develop an ITDR program, Jones recommends they start by conducting a thorough risk assessment to identify critical assets and potential threats. “Assign a dedicated ITDR owner or team responsible for coordinating prevention, detection, and response efforts, and develop a comprehensive ITDR plan that outlines roles, responsibilities, and processes for each stage of the ITDR lifecycle,” he says. He adds it’s important to regularly test and update the ITDR plan, incorporating lessons learned from past incidents and staying informed about the latest threats and technologies. Craig Debban, CISO for QuSecure, explains for a lot of organizations, there is a dependence on a disparate set of systems that are on-prem, in the cloud, or both -- and they are not always well integrated. "User identities are then decentralized since they are replicated in multiple places,” he says. “This diversity leads to gaps in functionality for the end user, negatively impacts operational efficiency, and is often overcome by oversubscribing permissions which impacts overall security and risk across the business.”


You can’t be an averagely talented programmer

In some ways, the level of engineering capability which people need is only going to become higher in terms of writing these AI systems and being able to engineer them. That said, this only applies to the very best programmers. You can’t be an averagely talented programmer anymore. With some of our large operations it’s clear by the way they are adopting automation that we won’t need a large number of developers. We will start having fewer people of that kind. People who actually understand engineering are going to become more in demand, and the people who just operate the technology will be less valuable. ... Right now, the technology industry needs a lot of people. But I see a lot of people who don’t really understand the technology or worse, they are afraid of technology. A lot of people who do not come from a computer science background can be working for tech companies but really are afraid of the technology. That’s not sustainable. Having a genuine interest in technology is, I would say, an important condition to reaching or exceeding your potential in a tech firm. Understand what’s happening in technology and do not be afraid of it.


How to Choose the Right Identity Resolution System

A best-in-class approach to identity resolution enables you to match many identifiers to the same person and then set the priority of matching to control how profiles are stitched together. ... While deterministic identity resolution might seem overly rigorous, it’s actually highly beneficial for personalization. Personalization use cases (sending an email, delivering a recommendation, and so on) require 100% confidence that a user is who you think they are. The only way to guarantee that confidence is through a deterministic identity algorithm. The alternative is simply guesswork and increases the likelihood that your personalization (or lack thereof) will have a detrimental impact on your customer relationships. A deterministic identity resolution solution enables 100% reliable profile unification, honoring the exact first-party data a customer provides to a brand. More importantly, embracing a deterministic approach as the core of your identity strategy will allow you to build high-quality customer profiles that power the personalized experiences customers have come to expect.
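As a toy illustration of deterministic matching with priority, consider the sketch below; the record shape, identifier names, and priority order are all hypothetical.

```python
# Toy deterministic identity resolution: profiles merge only on an EXACT
# identifier match, and MATCH_PRIORITY controls which identifier wins.
from collections import defaultdict

MATCH_PRIORITY = ["email", "phone", "loyalty_id"]  # email outranks phone, etc.

def resolve(records):
    profiles, index = [], {}
    for rec in records:
        profile = None
        for id_type in MATCH_PRIORITY:            # honor matching priority
            value = rec.get(id_type)
            if value and (id_type, value) in index:
                profile = index[(id_type, value)]
                break
        if profile is None:                       # no exact match: new person
            profile = defaultdict(set)
            profiles.append(profile)
        for field, value in rec.items():          # stitch attributes in
            if value:
                profile[field].add(value)
                index[(field, value)] = profile
    return profiles

merged = resolve([
    {"email": "a@example.com", "phone": None, "loyalty_id": "L-1"},
    {"email": "a@example.com", "phone": "+1-555-0100", "loyalty_id": None},
])
print(len(merged))  # 1 -- both records stitched to the same profile
```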


How to Become a Business Intelligence Analyst

As much as business intelligence can be about interpersonal action, much of an analyst’s duties are solitary ones, chief among these authoring procedures for data processing and collection. From there on, expect reporting and more reporting, including analytical reports that can be personalized for the needs of stakeholders, highlighting the most departmentally relevant findings. A business intelligence analyst also needs to maintain an active role in the various life cycles of data as it moves throughout the organization. After all, data reports are built upon regularly monitoring the way data is collected, looking at field reports, product summaries from third parties, and even through public record. As a function of this, a BIA may want to continually track burgeoning trends in tech or emerging markets that could potentially offer efficiency or value within the industry and their specific enterprise. Working in concert with specialists in data governance and stewardship, a BIA must oversee the integrity, security, and location of data storage. 



Quote for the day:

"A coach is someone who can give correction without causing resentment." -- John Wooden

Daily Tech Digest - June 20, 2023

How to navigate the co-management conundrum in MSP engagements

Ironically, enterprises can often suppress innovation by using MSPs transactionally. If the enterprise team has active roles in the delivery of services, it can help mitigate against thinking transactionally and foster a more cooperative style from both parties. If the enterprise team behaves transactionally, because they don’t work alongside the MSP but focus only on inputs and outputs or reported results, then, eventually, the MSP team can also tend to behave more transactionally. This places an unwanted governor on good ideas and flexibility from within the established collective resources. This doesn’t mean that there isn’t the need to have a robust management framework, including a statement of work (SOW) where commitments are clearly articulated. However, even if obligations are ultimately with the MSP, co-management of some of the task inputs or signoffs under a SOW can sometimes lead to more pragmatic, dispute-avoiding working practices.


ChatGPT and data protection laws: Compliance challenges for businesses

ChatGPT is not exempt from data protection laws, such as the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), the Payment Card Industry Data Security Standard (PCI DSS), and the Consumer Privacy Protection Act (CPPA). Many data protection laws require explicit user consent for the collection and use of personal data. ... By utilizing ChatGPT and sharing personal information with a third-party organization like OpenAI, businesses relinquish control over how that data is stored and used. This lack of control increases the risk of non-compliance with consent requirements and exposes businesses to regulatory penalties and legal consequences. Additionally, data subjects have the right to request the erasure of their personal data under the GDPR’s “right to be forgotten.” When using ChatGPT without the proper safeguards in place, businesses lose control of their information and no longer have mechanisms in place to promptly and thoroughly respond to such requests and delete any personal data associated with the data subject.


Navigating Cloud Costs and Egress: Insights on Enterprise Cloud Conversations

One of the things that we’ve seen at the enterprise scale is not just cloud egress cost, but the combination of cloud spend and being able to predict spend has been a constant topic of conversation. With the economic downturn, one of the things that we’re seeing is definitely more control over where money is being spent. I wouldn’t say it’s specifically about egress costs. ... The point that I’m trying to make is it kind of goes both ways. Some businesses extended the effect of the economic downturn – and just looking at the trend over a longer period of time, not just now in the last one or two years – is that the more sophisticated the organization is in terms of their capability of operating multiple environments, like an on-prem and the cloud or two clouds, the more likely they are to not buy into the “all-in” cloud. ... A lot of times what we heard from our clients was “I want to be on a cloud. On-prem data centers are done.” But I think about two or three years back is when we saw a wave of conversations in between. [They said] “Okay, I realize that all-in on cloud is not going to be my future.”


Hijacked S3 buckets used in attacks on npm packages

This latest threat is part of a growing trend of groups looking at the software supply chain as an easy way to deploy their malware and quickly have it reach a broad base of potential victims. Through attacks on npm and other repositories like GitHub, Python Package Index (PyPI), and RubyGems, miscreants look to place their malicious code in packages that are then downloaded by developers and used in their applications. In this case, they found their way in via the abandoned S3 buckets, part of AWS object storage services that enable organizations to store and retrieve huge amounts of data – files, documents, and images, among other digital content – in the cloud. They're accessed via unique URLs and used for such jobs as hosting websites and backing up data. The bignum package used node-gyp, a command-line tool written in Node.js, for downloading a binary file that initially was hosted on a S3 bucket. If the bucket couldn't be accessed, the package was prompted to look for the binary locally. "However, an unidentified attacker noticed the sudden abandonment of a once-active AWS bucket," Nachshon wrote.


Ending the ‘forever war’ against shadow IT

First, CIOs should establish a quick-reaction team (QRT) that deals only with these small projects that user departments are looking to achieve — especially when it comes to leveraging AI. The QRT needs to be an elite group within IT comprising members who understand the risks of data manipulation, are well versed in security pitfalls, and follow developments in AI enough to know its opportunities and pitfalls. It would be the mission of this group to analyze the requirements and assure that data access is secure and that the user understands the nature of the data being accessed. The QRT would also need to analyze the parameters of the work to be done to assure that the results are not already available from another existing source. They would also determine whether the software is compatible with the existing corporate network. This becomes even more critical if, at some point, the company wishes to scale the application to serve the entire corporation. Second, the shadow IT policy must be understood and enforced by the IT steering committee. 


Your AI coding assistant is a hot mess

As testified by Reeve’s wasted hours of bug-hunting, AI tools certainly aren’t foolproof. They’re often trained on open-source code, which frequently contains bugs – mistakes that the assistant is prone to replicating. They’re also notoriously prone to wild delusions, a fact, says Desrosiers, that cybercriminals can use to their advantage. AI coding assistants are liable to occasionally make up the existence of entire coding libraries. “Malicious actors can detect these hallucinations and launch malicious libraries with these names,” he says, “putting at risk people who let these hallucinated libraries execute in their production environment.” Careful oversight, says Desrosiers, is the only solution. That, too, can be facilitated by AI. “To de-risk this and other potential issues [at Visceral], we build single-purpose autonomous coding assistants to monitor for such threats,” says Desrosiers. David Mertz says it’s always important to not be too trusting. “From a security perspective, you just can’t trust code,” says the author and long-time Python programmer. 
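A cheap first filter against hallucinated dependencies is checking whether a suggested package exists at all; the Python sketch below queries PyPI's public JSON endpoint, with made-up package names. Note that existence alone proves little, since, as described above, attackers register exactly these hallucinated names, so a real gate would also weigh package age, maintainers, and download history.

```python
# Does an AI-suggested dependency even exist on PyPI?
# Package names below are made up for illustration.
import requests

def exists_on_pypi(name: str) -> bool:
    # PyPI's JSON API returns 404 for packages that were never published.
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

for suggested in ["requests", "totally-invented-helper-lib"]:
    status = "found" if exists_on_pypi(suggested) else "not on PyPI: possible hallucination"
    print(f"{suggested}: {status}")
```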


Apple beefs up enterprise identity, device management

It’s important to note that account-driven user enrollment was largely designed as a way for users to enroll their personal devices into MDM, while corporate devices are typically managed with a more traditional profile-based enrollment that gives IT more access and management options. Apple is now offering account-driven device enrollment that offers added capabilities for IT with a user experience similar to account-driven user enrollment. ... Along with improving the enrollment options, Managed Apple IDs will get more management capabilities. There are two major additions. The first is to control which types of managed devices a user is allowed to access: any device regardless of ownership, only managed devices enrolled via MDM, or only devices that are Supervised. Supervised devices are company-owned and have stringent management controls. The second is the ability to control which iCloud services a user can access on a managed device. Each sync service can be enabled or disabled for a user’s Managed Apple ID.


Prime minister Rishi Sunak faces pressure from banks to force tech firms to pay for online fraud

In response to the TSB CEO’s letter last week, a Meta spokesperson said in a statement: “This is an industry-wide issue and scammers are using increasingly sophisticated methods to defraud people in a range of ways, including email, SMS and offline. We don’t want anyone to fall victim to these criminals, which is why our platforms already have systems to block scams, financial services advertisers now have to be FCA authorised to target UK users and we run consumer awareness campaigns on how to spot fraudulent behaviour. People can also report this content in a few simple clicks and we work with the police to support their investigations.” But, in the letter to Sunak, banks said they want the tech companies to stop fraud on their platforms and to contribute to refunds for victims. They also called for a public register showing the failure of tech giants to stop scams. The letter warned that the high level of fraud was “having a material impact on how attractive the wider UK financial sector is perceived by inward investors, which as we know, is critical for the health of the City of London and wider UK economy”.


Why assessing third parties for security risk is still an unsolved problem

The challenge that TPRM companies have is rather simple: Provide a mechanism for companies that do business with other companies to evaluate the risk that their vendors present to them, from a cybersecurity perspective. SecurityScorecard and its primary competitor, BitSight, use a similar methodology: Create a risk score (sort of like your credit score), evaluate companies, and score them. ... The credit reporting agencies, for better or worse, have much more data than the TPRM scoring companies. They’re embedded throughout our financial system, collecting a lot of information that shouldn’t be publicly available. The TPRM scoring companies, on the other hand, are doing the equivalent of drive-by appraisals. They look at the outside of businesses on the internet and decide how reputable they are based on their external appearances. Of course, certain business types will look more secure than others. The alternative to TPRM scoring is, sadly, the TPRM questionnaire industry, which is only marginally less unhelpful. This is an industry focused on shipping massive questionnaires to vendors, which take huge efforts to fill out.


Debugging Production: eBPF Chaos

Tools and platforms based on eBPF provide great insights and help debug production incidents. These tools and platforms will need to prove their strengths and unveil their weaknesses, for example, by attempting to break or attack the infrastructure environments and observing the tool/platform behavior. At first glance, let’s focus on Observability and chaos engineering. The Golden Signals (Latency, Traffic, Errors, Saturation) can be verified using existing chaos experiments that inject CPU/Memory stress tests, TCP delays, DNS random responses, etc. ... Continuous Profiling with Parca uses eBPF to auto-instrument code, so that developers don’t need to modify the code to add profiling calls, helping them to focus. The Parca agent generates profiling data insights into callstacks, function call times, and generally helps to identify performance bottlenecks in applications. Adding CPU/Memory stress tests influences the application behavior, can unveil race conditions and deadlocks, and helps to get an idea of what we are actually trying to optimize.
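As a bare-bones stand-in for the CPU stress injections mentioned above, the sketch below saturates every core for a fixed window, which is enough to watch how a Parca profile or a Golden Signals dashboard attributes the load; worker count and duration are arbitrary choices, and real experiments would use a chaos toolkit.

```python
# Minimal CPU-stress "experiment": saturate all cores for a fixed window,
# then observe how the profiler/dashboard attributes the extra load.
import multiprocessing
import os
import time

def burn(seconds: float) -> None:
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        pass  # busy-spin to saturate one core

if __name__ == "__main__":
    workers = os.cpu_count() or 1
    procs = [multiprocessing.Process(target=burn, args=(30.0,)) for _ in range(workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```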



Quote for the day:

"Uncertainty is a permanent part of the leadership landscape. It never goes away." -- Andy Stanley

Daily Tech Digest - June 19, 2023

Finding the Nirvana of information access control or something like it

In the mythical land of Nirvana, where everything is perfect, CISOs would have all the resources they needed to protect corporate information. The harsh reality, which each CISO experiences daily, is that few entities have unlimited resources. Indeed, in many entities when the cost-cutting arrives, it is not unusual for security programs that have not (so far) positioned themselves as a key ingredient in revenue preservation to be thrown by the wayside — if you ever needed motivation to exercise access control to information, there you have it. ... For those who thought they were finished with Boolean logic in secondary school, it’s back — and attribute-based access control (ABAC) is a prime example of the practicality of utilizing that logic in decision trees to determine access permission. The adoption of ABAC allows access to protected information to be “hyper-granular.” An individual’s access may initially be defined by one’s role and will certainly fall within the established policies.
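Because ABAC decisions really are Boolean expressions over attributes, a toy policy check makes the hyper-granular point concrete; every attribute name and rule below is hypothetical.

```python
# Toy ABAC decision: Boolean predicates over subject, resource, and context
# attributes. The "hyper-granular" part is the AND of independent conditions.
def permit(subject: dict, resource: dict, context: dict) -> bool:
    role_ok = subject["role"] in resource["allowed_roles"]
    clearance_ok = subject["clearance"] >= resource["sensitivity"]
    network_ok = context["network"] == "corporate" or subject["mfa_verified"]
    hours_ok = 8 <= context["hour"] < 18
    return role_ok and clearance_ok and network_ok and hours_ok

print(permit(
    {"role": "analyst", "clearance": 3, "mfa_verified": True},
    {"allowed_roles": {"analyst", "auditor"}, "sensitivity": 2},
    {"network": "vpn", "hour": 10},
))  # True: role, clearance, MFA, and business hours all check out
```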


Goodbyes are difficult, IT offboarding processes make them harder

To ensure that the business continues even though the employee is gone, stale accounts are created with grace periods during which the employee’s credentials can still be used to access the organization’s networks. This is great for retaining the knowledge this employee accumulated and ensuring that their replacement is well-briefed, but since the employee is gone, nobody will remember to monitor their account, as malicious actors will soon notice. This employee may also have been forwarding emails to their personal email account or accessing their work email from personal devices for business purposes, making it easier for hackers to obtain sensitive company data and impossible for the organization to know. Existing offboarding processes may frustrate business executives due to their rigidity – and they aren’t alone in their annoyance. What’s bad for security is also, inevitably, bad for business. Security teams today must manually ensure that all access privileges, including access to various systems, applications, databases and physical facilities, be promptly terminated.

Leaders are made, not born: Although this is technically correct, which is why we rarely see 5 year olds running companies or countries (though, in fairness, the adults that do often fail to provide convincing signs of superior emotional or intellectual maturity), people’s potential for leadership can be detected at a very young age. Furthermore, the dispositional enablers that increase people’s talent for leadership have a clear biological and genetic basis. ... The best leaders are confident: Not true. Although confidence does predict whether someone is picked for a leadership role, once you account for competence, expertise, intelligence, and relevant personality traits, such as curiosity, empathy, and drive, confidence is mostly irrelevant. And yet, our failure to focus on competence rather than confidence, and our lazy tendency to select leaders on style rather than substance (such as during presidential debates, job interviews, and short-term in person interactions), contributes to most of the leadership problems described in point 1. Note that when leaders have too much confidence they will underestimate their flaws and limitations, putting themselves and others at risk.


How Organizations Can Create Successful Process Automation Strategies

Organizations can promote more collaboration by adopting a modified “Center of Excellence” (CoE) approach. In some companies, that might mean assembling a community devoted to process automation tasks and strategies, in which practitioners can share best practices and ask questions of one another. The CoE should help members from business and IT teams work together better by coordinating tasks, avoiding reinventing projects from scratch, and generally empowering them to drive continuous improvement together. Some organizations may want to create a central focus on process automation without using the actual CoE term. The terminology itself carries some legacy baggage from centralized Business Process Management (BPM) software. Some relied on a centralized approach for their CoE, counting on one team to implement process automation for the entire organization. That approach often led to bottlenecks for both developers and a line of business leaders, giving the CoE a bad reputation with few demonstrable results.


8 habits of highly secure remote workers

By working in a public place you are exposing yourself to serious cybersecurity risks. The first, and most direct, is the over-the-shoulder attack, also known as shoulder surfing. All this takes is for an observant, determined hacker to be sitting in the same space as you, paying close attention to your every move. ... "As you use public Wi-Fi, you are exposing your laptop or your device to the same network somebody else can log on to so that means they can actually peruse through your network, depending on the security of the local network on your laptop," says Gartner VP Analyst, Patrick Hevesi. Doing work in a public space while also not using public Wi-Fi may seem like a paradox, but there are simple and secure solutions. The first is using a VPN when accessing corporate information in public. ... "Your security is as good as your password, because that's the first line of defense," says Shah. "You want to make sure that you have a good strong password, and also don't use the same password for all the other sites you may be accessing."


Multicloud deployments don't have to be so complicated

The solution to these problems is not scrapping a complex cloud deployment. Indeed, considering the advantages that multicloud can bring (cost savings and the ability to leverage best-of-breed solutions), it’s often the right choice. What gets enterprises in trouble is the lack of an actual plan that states where and how they will store, secure, access, manage, and use all business data no matter where it resides. It’s not enough to push inventory data to a single cloud platform and expect efficiencies. We’re only considering data complexity here; other issues also exist, including access to application functions or services and securing all systems across all platforms. Data is typically where enterprises see the problems first, but the other matters will have to be addressed as well. A solid plan tells a complete data access story and includes data virtualization services that can make complex data deployments more usable by business users and applications. It also enables data security and compliance using a software layer that can reduce complexity with abstraction and automation. Simple data storage is only a tiny part of the solution you need to consider.


E-Commerce Firms Are Top Targets for API, Web Apps Attacks

Attack vectors, such as server-side template injection, server-side request forgery and server-side code injection, have also become popular and may lead to data exfiltration and remote code execution. "This, in turn, may be playing a role in preventing online sales and damaging a company's reputation," the researchers said, citing an Arcserve survey in which 60% of consumers said they wouldn't buy from a website that had been breached in the previous 12 months. SSTI is a hacker favorite for zero-day attacks. Its use is well-documented in "some of the most significant vulnerabilities in recent years, including Log4j," the researchers said. Hackers mainly targeted commerce companies with Log4j, and 58% of all exploitation attempts happened in the space. The Hafnium criminal group popularized SSRFs, which they used to attack Microsoft's Exchange Servers and reportedly launched a supply chain cyberattack that affected 60,000 organizations, including commerce. Hafnium used the SSRF vulnerability to run commands to the web servers, according to the report.
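To see why SSTI is so dangerous, compare rendering attacker-controlled text as a template versus passing it in as data; below is a minimal Jinja2 sketch with a benign stand-in payload.

```python
# SSTI in miniature: rendering attacker-controlled text AS a template lets
# expressions execute; passing the same text in as DATA keeps it inert.
from jinja2 import Template

user_input = "{{ 7 * 7 }}"  # benign stand-in for a malicious payload

vulnerable = Template("Hello " + user_input).render()
print(vulnerable)  # "Hello 49" -- the expression actually ran

safe = Template("Hello {{ name }}").render(name=user_input)
print(safe)        # "Hello {{ 7 * 7 }}" -- treated as plain data
```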


It’s going to take AI to power AI

AI in the datacentre has the ability to act as a pair of eyes, keeping a keen watch on every aspect of the facility to detect and prevent threats. Analysing data from sources such as online access logs and network traffic would allow AI systems to watch for and alert organisations to cyber breaches in seconds. Further, we’re heading in the direction where AI-powered sensors could apply human temperature checks and facial recognition to monitor for physical intrusions. Ultimately, AI will have the opportunity to tune datacentres to operate like well-oiled machines, making sure all components work in harmony to deliver the highest level of performance in our AI-hungry world – a world pressurised by a cost-of-energy crisis and expanding cyber security threats. While the reality is more nuanced, put plainly, it is going to take AI to power AI. In fact, Gartner estimates that half of all cloud datacentres will use AI by 2025. It’s going to be a productive couple of years for industry developing one of the fastest-growing technologies, rolling it out, and doing so in a way that ensures trust.


Beyond ChatGPT: What is the Business Value of Generative Artificial Intelligence?

Beyond the attraction to the technology itself, generative AI has huge potential business value. Regardless of the processes, professions, or sectors of activity involved, the common thread among artificial intelligence projects is their shared objective of enabling, expediting, or enhancing human actions, either by facilitating or accelerating them. The use of AI usually starts with a question, or a problem. This is immediately followed by the analysis of a significant amount of exogenous information or endogenous information, with the aim of obtaining an answer to the question or problem through the creation of information useful to humans: aiding decision-making, detecting an anomaly, analyzing a hand-drawn schema, prioritizing problems to be solved, etc. More broadly, the automated generation of information makes it easier and safer to streamline some processes, such as moving from an idea to a first version by allowing for quicker validation or failure recognition, A/B testing, and simplified re-experimentation. 


Even in cloud repatriation there is no escaping hyperscalers

Hansson’s blog sparked pushback from cloud advocates like TelcoDR CEO Danielle Royston. She contended in an interview with Silverlinings that those using the cloud aren’t just paying for servers, but also for the proprietary tools the different cloud giants provide, the salaries they pay their top-tier developer talent, the hardware upgrades they make available to cloud users and the built-in security they offer. For those who use the cloud to its full potential, she said, the cloud is “the gift that keeps on giving.” Not only that, but those looking to repatriate workloads will need to invest significant time and money to transition back and hire more staff to develop new applications and manage the on-prem servers, she added. ... So, who’s right? Well, it seems the answer will vary by company and even by application. Pichai explained the cloud is the ideal environment for a small handful of workloads, namely “vanilla applications” which incorporate only standard rather than specialized features and “spikey applications” which need to scale on demand to accommodate irregular patterns of usage.



Quote for the day:

"To be an enduring, great company, you have to build a mechanism for preventing or solving problems that will long outlast any one individual leader" -- Howard Schultz