Daily Tech Digest - January 26, 2023

Bringing IT and Business Closer Together: Aiming for Business Intimacy

“Businesses today are looking to drive new value from software, to increase competitiveness, open new revenue streams, and increase efficiencies,” he explains. “As part of this, the business often drives the software decisions, proof-of-concepts, vendor selection, and more.” It’s not until the end of the process that IT is brought in to “sign off and deploy”, and this siloed approach results in teams working separately, often producing poor results and driving animosity between the groups. “Instead, if the business and IT teams work together for the entire project, requirements are surfaced and expertise from across the organization is brought in to make the best possible decisions,” Maxey says. From his perspective, there are several best practices that can ensure closer alignment between IT and businesses. “Embed IT into the business unit, versus in a separate department and ask IT to project manage business software projects so they are always in discussions and aware of the process,” he says. 


IT leadership: Seven spectrums of choice for CIOs in 2023

Purpose is the first thing that we want people to be thinking about in light of the office shock that they have been going through. It’s a question for organizational leaders - what is the purpose of your organization? On the spectrum, we say that a purpose ranges from the individual to the collective. And it’s important to think about that because for an individual first starting out in the workplace, their purpose may be very straightforward in terms of supporting themselves and their family. But as they get further into their career, they can enlarge their thinking about a purpose that actually can make the world better. And the same thing is true for organizations – they may start out very focused on getting their business going, but later can think about how they can contribute to the world. And in that sense, another spectrum – outcomes – is very closely related. You may start out with your primary outcome being profit, but then once you’re established and comfortable, you can think much larger, like bringing prosperity to the world, whether that world is local or much larger.


The risks of 5G security

With 5G-enabled automated communications, machines and devices in homes, in factories, and on the go will communicate vast amounts of data with no human intervention, creating greater risk. Kayne McGladrey, field CISO at HyperProof and a member of IEEE, explained the dangers of such an approach. “Low-cost, high-speed and generally unmonitored networking devices provide threat actors a reliable and robust infrastructure for launching attacks or running command and control infrastructure that will take longer to detect and evict,” he said. McGladrey also pointed out that as organizations deploy 5G as a replacement for Wi-Fi, they may not correctly configure or manage the optional but recommended security controls. “While telecommunications providers will have adequate budget and staffing to ensure the security of their networks, private 5G networks may not and thus become an ideal target for a threat actor,” he said. 5G virtualized network architecture opens every door and window in the house to hackers because it creates — in fact, requires — an extraneous supply chain for software, hardware and services.


Fujitsu: Quantum computers no threat to encryption just yet

Fujitsu said its researchers also estimate that it would be necessary for such a fault-tolerant quantum computer to work on the problem for about 104 days to successfully crack RSA. However, before anyone gets too complacent, it should be noted IBM's Osprey has three times the number of qubits that featured in its Eagle processor from the previous year, and the company is aiming to have a 4,158-qubit system by 2025. If it continues to advance at this pace, it may well surpass 10,000 qubits before the end of this decade. And we'd bet our bottom dollar intelligence agencies, such as America's NSA, are or will be all over quantum in case the tech manages to crack encryption. Quantum-resistant algorithms are therefore still worth the effort, even if the NSA is ostensibly skeptical of quantum computing's crypto-smashing powers. Fujitsu said that although its research indicates the limitations of quantum computing technology preclude the possibility of it beating current encryption algorithms in the short term, the IT giant will continue to evaluate the potential impact of increasingly powerful quantum systems on cryptography security.


State of DevOps: Success happens through platform engineering

The platform engineering team takes responsibility for designing and building self-service capabilities to minimise the amount of work developers need to do themselves. This, according to the report’s authors, enables fast-flow software delivery. Platform teams deliver shared infrastructure platforms to internal software development teams. The team responsible for the platform treats it as a product for its users, not just an IT project. ... Ronan Keenan, research director at Perforce, said the concepts behind platform engineering have been used on a small scale at large technology organisations for many years, but platform engineering provides a more concrete focus. “The concept is about building self-service capabilities which engineers and developers can use. This reduces their workload as they do not have to build these capabilities themselves,” he said, adding that a platform team builds and maintains shared IT infrastructure. By having such a shared infrastructure: “The software development process can run quicker since you are lightening the load on the developers and engineers. Platform engineering also offers a more consistent process.”


How Can Big Tech Layoffs be a Boon for the Quantum Computing Cloud?

The issue for quantum enterprises of finding appropriate people has frequently come up at conferences for the industry. Some of that was brought on in recent years by the fierce competition from the traditional computing companies, which increased their development efforts during the Covid years and also implemented work-from-home policies that made it simpler to join an organization with its headquarters in a different city. However, the cloud may have a bright spot. The good news is that a skilled classical engineer can obtain the necessary knowledge from a variety of places, including online and short courses, to collaborate effectively with quantum physicists. Quantum organizations in desperate need of personnel to help carry out their goals should therefore consider recruiting people with experience in conventional computing. Not only might these hires become productive in the organization more easily than expected, but they might also be able to use their prior experience at traditional computing companies to their advantage and offer original solutions to any technical issues that arise.


Attackers use portable executables of remote management software to great effect

The phishing emails are help desk-themed – e.g., impersonating Geek Squad or GeekSupport – and “threaten” the recipient with the renewal of a pricey service/subscription. The goal is to get the recipient to call a specific phone number manned by the attackers, who then try to convince the target to install the remote management software. “CISA noted that the actors did not install downloaded RMM clients on the compromised host. Instead, the actors downloaded AnyDesk and ScreenConnect as self-contained, portable executables configured to connect to the actor’s RMM server,” the agency explained. “Portable executables launch within the user’s context without installation. Because portable executables do not require administrator privileges, they can allow execution of unapproved software even if a risk management control may be in place to audit or block the same software’s installation on the network. Threat actors can leverage a portable executable with local user rights to attack other vulnerable machines within the local intranet or establish long term persistent access as a local user service.”
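
The mitigation this pattern suggests is to watch for known RMM client binaries executing from user-writable paths rather than from an administrator-approved install location. A minimal sketch of that heuristic follows; the tool names, approved paths, and process-list shape are all illustrative assumptions, not CISA guidance, and real detection would use EDR telemetry:

```python
# Sketch: flag known RMM clients executing outside approved install paths
# (e.g. launched as portable executables from a Downloads folder).
RMM_BINARIES = {"anydesk.exe", "screenconnect.client.exe"}   # illustrative names
APPROVED_PREFIXES = ("c:\\program files\\", "c:\\program files (x86)\\")

def flag_portable_rmm(processes):
    """processes: iterable of (image_name, full_path) pairs.
    Return those that match an RMM tool name but run from an
    unapproved (likely user-writable) location."""
    flagged = []
    for name, path in processes:
        if name.lower() in RMM_BINARIES and not path.lower().startswith(APPROVED_PREFIXES):
            flagged.append((name, path))
    return flagged
```

Because portable executables never touch the installer pipeline, path-based checks like this complement, rather than replace, install-time allow-listing.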


The Anticipation Game: Spotlight on Data Backups

Regardless of how reliable a storage platform is, keeping all critical data stored in one place is a disaster waiting to happen for any organisation. To avoid the pains of security breaches, ransom payments, and data leaks, companies should aim to create and distribute backup copies across multiple onsite and offsite storage destinations. Another way to truly keep ransomware at bay is to apply immutability for backup data. Immutability means data is stored in such a way that it cannot be altered, deleted, or encrypted by ransomware. The ideal data backup solution should have a well-rounded set of ransomware protection and recovery features, allowing customers to achieve near-zero downtime and avoid paying ransom in return for access to the data. For example, it might include the capability to store backups in ransomware-resilient Amazon S3 buckets and hardened Linux-based local repositories to prevent data deletion or encryption by ransomware. Ideally, IT admin teams would also be able to leverage backup-to-tape functionality to create air-gapped backups on tape to reduce the chance of ransomware encryption.
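
The immutability property described above can be modeled in a few lines. This is a conceptual sketch only: in production, immutability comes from platform features such as S3 Object Lock or a hardened repository, not from application code, and the class and retention policy here are invented for illustration:

```python
import time

class ImmutableBackupStore:
    """Toy model of a WORM (write once, read many) backup repository:
    once written, an object cannot be overwritten or deleted until its
    retention window expires."""

    def __init__(self, retention_seconds):
        self.retention = retention_seconds
        self._objects = {}  # name -> (data, locked_until)

    def write(self, name, data, now=None):
        now = time.time() if now is None else now
        if name in self._objects:
            raise PermissionError(f"{name} is immutable; cannot overwrite")
        self._objects[name] = (data, now + self.retention)

    def delete(self, name, now=None):
        now = time.time() if now is None else now
        _, locked_until = self._objects[name]
        if now < locked_until:
            raise PermissionError(f"{name} is under retention until {locked_until}")
        del self._objects[name]
```

The key point the sketch makes is that ransomware running with the backup service's own credentials still cannot encrypt-in-place or purge copies that are under retention.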


B2B integration is the backbone of a resilient supply chain: OpenText study

Advanced supply chain integration capabilities can help support more efficient and effective current approaches as well as new models that translate directly to business performance. ... B2B integration capabilities and processing align with the top business priorities of reducing operational and logistical costs, speeding time to market, improving data quality/accuracy, and improving visibility. Recognizing the need for seamless B2B integration and a future-proof supply chain, OpenText offers a portfolio of end-to-end solutions through the OpenText Business Network Cloud. This network provides users with the ability to automate business processes and facilitate efficient, secure, and compliant collaboration between people, systems, and things – providing a true foundation for establishing an advanced digital backbone to help support business growth and transformation initiatives. By connecting to OpenText’s powerful suite of cloud applications via our secure, scalable and highly reliable OpenText Trading Grid platform, users can allow internal and external stakeholders to collaborate seamlessly across this single and central network to exchange transactions such as purchase orders, shipment notices and payment instructions.


Five steps to build a business case for data and analytics governance

The causal relationship between poor data and analytics and poor business performance must be highlighted if a compelling business case for governance is to be made. Initially, look to identify the business processes and process owners that are critical in addressing the problem statement. These will often span multiple business areas, so look to focus on key processes rather than on lines of business. This will help break down the silos that have led to the insular and disconnected governance of data and analytics. Determine the most impactful key performance indicators (KPIs) and key risk indicators (KRIs) for business success, and then identify the specific data and analytics assets that are used in the KPIs and KRIs. These assets are the ones that must fall within the scope of the data and analytics governance proposal. A key characteristic of highly successful D&A governance initiatives is their ability to effectively define and manage scope. Be clear on what is in scope and what is out of scope for governance while identifying the key stakeholders needed in the D&A governance steering group. 
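
The scoping step can be made mechanical: once each KPI or KRI is mapped to the data and analytics assets it depends on, the governance scope is simply the union of those assets. A sketch with made-up indicator and asset names:

```python
def governance_scope(indicator_assets, selected_indicators):
    """Given a mapping of KPI/KRI name -> data assets it depends on,
    return the set of assets that must fall within governance scope
    for the selected indicators. All names here are hypothetical."""
    scope = set()
    for indicator in selected_indicators:
        scope |= set(indicator_assets.get(indicator, ()))
    return scope
```

Anything outside the returned set is, by construction, out of scope, which makes the in/out boundary explicit for the D&A governance steering group.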



Quote for the day:

"The litmus test for our success as Leaders is not how many people we are leading, but how many we are transforming into leaders" -- Kayode Fayemi

Daily Tech Digest - January 25, 2023

How Quantum Metric is using data analytics to optimize digital teams

Quantum Metric was Ciabarra’s attempt to solve problems he personally faced while running his online app store, Intelliborn. As the company grew to over one million active users per day, he uncovered how difficult it was to see and understand all of his customers at scale, and in real time. “I had used Google Analytics, which was great to see how traffic was growing, but it couldn’t tell me where my customers were struggling, and why. I would fix something that someone on Twitter was ‘yelling’ at me about, but it sometimes would impact my business, and sometimes it wouldn’t,” Ciabarra told VentureBeat. “I thought — why is this so hard? Maybe addressing the squeaky wheel didn’t make sense from a business perspective.” That sparked the idea for Quantum Metric. So, with his cofounding engineer, David Wang, alongside his cat, Indy, Ciabarra went on to develop the first version of the Quantum Metric platform. It focused on surfacing customer frustrations and helping organizations see their customer experience through session replays.


Creating a competitive edge with a cloud maturity strategy

Companies cannot become cloud mature overnight. Cloud maturity involves a strategic effort from all levels of the business to look carefully at cloud spend, mitigate cloud-related risks, and upskill workers in cloud technologies. Those that manage to achieve a high level of cloud maturity remain much more competitive than firms that stop at merely adopting cloud technologies. According to McKinsey, Fortune 500 companies could earn more than $1 trillion by 2030 as a result of cloud adoption and optimisation. Deutsche Bank recognised that in order to keep up with the future of banking and remain competitive, it needed to become more cloud mature. ... Cloud maturity is essential to a company’s success – but first leaders need to make sure their employees are equipped with the skills required to solve security issues. Only then will businesses be ready to implement the right strategies to maximise their return on investment and realise the full potential of cloud computing.


CNET Is Testing an AI Engine. Here's What We've Learned, Mistakes and All

Over the past 25 years, CNET built its expertise in testing and assessing new technology to separate the hype from reality and help drive conversations about how those advancements can solve real-world problems. That same approach applies to how we do our work, which is guided by two key principles: We stand by the integrity and quality of the information we provide our readers, and we believe you can create a better future when you embrace new ideas. The case for AI-drafted stories and next-generation storytelling tools is compelling, especially as the tech evolves with new tools like ChatGPT. These tools can help media companies like ours create useful stories that offer readers the expert advice they need, deliver more personalized content and give writers and editors more time to test, evaluate, research and report in their areas of expertise. In November, one of our editorial teams, CNET Money, launched a test using an internally designed AI engine – not ChatGPT – to help editors create a set of basic explainers around financial services topics. 


Common Misconceptions About Modern Ransomware

Not too long ago, if someone decided to pay a ransom, they might not receive the decryption keys after doing so. However, today, ransom payers usually do receive the keys. This was a quiet shift that took place over several years. Before this shift took place, the unsophisticated encryption process could be considered hit or miss. Today, ransomware and threat actors hit more than they miss. Often, they can encrypt most of the data—and do so quickly. Just several years ago, a threat group would take many months to move around in a network, find data sources, monitor traffic and begin an encryption process. Fast forward to today, and the average attack-to-encryption time is 4.5 days. During the early days of ransomware attacks, threat groups would occasionally move to a domain controller and gain access to Active Directory. This granted them the keys to the kingdom and had a detrimental effect on the victim organization. Today, because of poor Active Directory security and configurations, threat groups can often escalate their privileges and compromise Active Directory rapidly.


Can AI replace cloud architects?

The most likely path is that tactical AI tools will continue to appear. These tools will focus on specific architectural areas, such as network design, database design, platform selection, cloud-native design, security, governance, use of containers, etc. The output should be as good as, if not better than, what we see today because these tools will leverage almost perfect data and won’t have those pesky human frailties that drive some architecture designs—emotions and feelings. Of course, some of these AI tools exist today (don’t tell me about your tool) and are progressing toward this ideal. However, their usefulness varies depending on the task. Tactical AI tools must still be operated by knowledgeable people who understand how to ask the right questions and validate the designs and recommendations the tool produces. Although it may take fewer people to pull off the tactical component design of a large cloud architecture, the process will not likely eliminate all humans. Remember, many of these mistakes occur because enterprises have difficulty finding skilled cloud pros.


Chinese threat actor DragonSpark targets East Asian businesses

SparkRAT uses the WebSocket protocol to communicate with the C2 server and features an upgrade system. This allows the RAT to automatically upgrade itself to the latest version available on the C2 server upon start-up by issuing an upgrade request. “This is an HTTP POST request, with the commit query parameter storing the current version of the tool,” researchers noted. In the attacks analyzed by the researchers, the SparkRAT version used was built on November 1, 2022, and deployed 26 commands. “Since SparkRAT is a multi-platform and feature-rich tool, and is regularly updated with new features, we estimate that the RAT will remain attractive to cybercriminals and other threat actors in the future,” researchers said. DragonSpark also uses the Golang-based m6699.exe to interpret encoded source code at runtime and launch a shellcode loader. This initial shellcode loader contacts the C2 server and executes the next-stage shellcode loader.
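
The upgrade check the researchers describe is easy to picture. The sketch below reconstructs the request pattern illustratively: the host, path, and version string are invented, this is not SparkRAT's actual code, and nothing is sent over the network; it only shows how a current build gets reported via the commit query parameter:

```python
from urllib.parse import urlencode

def build_upgrade_request(c2_base, current_commit):
    """Model of the version-check call: a POST URL whose 'commit' query
    parameter reports the client's current build, letting the server
    decide whether to push a newer binary. URL construction only."""
    return f"{c2_base}/upgrade?{urlencode({'commit': current_commit})}"
```

For defenders, the takeaway is that a stable, parameterized check-in like this is exactly the kind of beacon that network detection rules can key on.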


Microsoft to Block Excel Add-ins to Stop Office Exploits

Excel add-in files are designated with the XLL file extension. They provide a way to use third-party tools and functions in Microsoft Excel that aren't natively part of the software; they're similar to dynamic link libraries (DLLs) but with specific features for Excel spreadsheets. For cyberattackers, they offer a way to read and write data within spreadsheets, add custom functions, and interact with Excel objects across platforms, Vanja Svajcer, a researcher with Cisco's Talos group, said in a December analysis. And indeed, attackers started experimenting with XLLs in 2017, with more widespread usage coming after the technique became part of common malware frameworks, such as Dridex. ... One of the reasons for that is that Microsoft Office does not block the feature but raises a dialog box instead, a common approach that Microsoft has taken in the past, Svajcer wrote: "Before an XLL file is loaded, Excel displays a warning about the possibility of malicious code being included. Unfortunately, this protection technique is often ineffective as a protection against the malicious code, as many users tend to disregard the warning."
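
Because that warning dialog is so often clicked through, one defensive response is to treat the .xll extension on inbound files as block-on-sight at the mail or web gateway. A trivial sketch of that policy (the filenames and block list are examples, not a vendor feature):

```python
def blocked_attachments(filenames, blocked_exts=(".xll",)):
    """Return inbound file names whose extension is on the block list.
    Matching is case-insensitive, since Windows extensions are."""
    return [f for f in filenames
            if f.lower().endswith(tuple(blocked_exts))]
```

Unlike the in-app warning, a gateway block never gives the user the chance to disregard it.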


The Intersection of Trust and Employee Productivity

Unfortunately, many companies adopt a "block first, ask questions later" approach to security, which can erode employee trust and undermine the benefits of empowering employees to choose their own applications. In our previous research at Cerby, we found that 19% of employees ignore application blocks and continue to use the apps they prefer, despite such restrictions. This suggests that organizations should seek to balance high levels of trust in employees with zero trust principles for data, applications, assets and services (DAAS). A more effective approach may be to adopt an enrollment-based approach to security that balances trust-positive initiatives like employee choice of applications with cybersecurity and compliance requirements. By adopting this approach, organizations can build digital trust with employees by giving them more control over the tools and technologies they use while still ensuring the security and reliability of their systems and processes for consumers. But the benefits of building high levels of employee trust go beyond improved job performance and satisfaction. 


Examining the CIO time management dilemma

The skill profile and expectations of the CIO have, therefore, shifted to balance business management with technology, so, where necessary, CIOs need to bolster those skills accordingly to deliver the right solutions for the business. “What makes a strong CIO is being able to recognize where the blind spots in their skill sets are and bring supplemental skills in with other leaders in the organization,” she adds. So the CIO role has evolved into this business manager position to understand how technology delivers value to the business. “And because technology is becoming the way we do business, it becomes imperative for the CIO to have that business acumen in addition to the technology,” she says, adding that such acumen is necessary to articulate and justify investment in technology to enable organizational growth. In addition, as CEOs have increased their investment into digital advances in security, AI, and data analytics, their demand for results has grown, according to Gartner VP analyst Daniel Sanchez-Reina.


Cloud egress costs: What they are and how to dodge them

Egress charges work the other way, by discouraging firms from transferring data out, either to other cloud providers, or to on-premise systems. “They’ve made the commercial decision that ingress should be effectively absorbed within the consolidated cost of service represented in the unit prices of cloud components, but egress charges are separated out,” says Adrian Bradley, head of cloud transformation at consulting firm KPMG. “At the heart of that, it is a real cost. The more a client consumes of it, the more it costs the cloud providers.” Firms have seen egress charges rise as they look to do more with their data, such as mining archives for business intelligence purposes, or to train artificial intelligence (AI) engines. Data transfers can also increase where organisations have a formalised hybrid or multi-cloud strategy. “Either there’s a need to do a lot more data egress, or perhaps there’s just simply the positive use of cloud to develop new products and services that intrinsically use more data,” says Bradley. The result is that firms are moving more data from cloud storage, and are being hit by increasing costs.
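
The commercial asymmetry is easiest to see with numbers. This sketch uses a single hypothetical per-GB rate and free allowance; real provider pricing is tiered and varies by region, destination, and service:

```python
def monthly_egress_cost(gb_out, rate_per_gb=0.09, free_gb=100):
    """Estimate monthly egress spend: the first free_gb transferred out
    is free, the remainder is billed per GB. The default rate and free
    tier are illustrative only, not any provider's actual pricing."""
    billable = max(0.0, gb_out - free_gb)
    return billable * rate_per_gb
```

At these illustrative numbers, moving an extra terabyte out per month (for BI mining or AI training, say) adds roughly $90 — while moving the same terabyte in would typically cost nothing.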



Quote for the day:

"Leadership does not depend on being right." -- Ivan Illich

Daily Tech Digest - January 24, 2023

Status of Ethical Standards in Emerging Tech

This chasm between invention and accountability is the source of much of the angst, dismay, and danger. “It is much better to design a system for transparency and explainability from the beginning rather than to deal with unexplainable outcomes that are causing harm once the system is already deployed,” says Jeanna Matthews, professor of computer science at Clarkson University and co-chair of the ACM US Technology Committee’s Subcommittee on AI & Algorithms. To that end, the Association for Computing Machinery’s global Technology Policy Council (TPC) released a new Statement on Principles for Responsible Algorithmic Systems authored jointly by its US and Europe Technology Policy Committees in October 2022. The statement includes nine instrumental principles: Legitimacy and Competency; Minimizing Harm; Security and Privacy; Transparency; Interpretability and Explainability; Maintainability; Contestability and Auditability; Accountability and Responsibility; and Limiting Environmental Impacts, according to Matthews.


Best practices for devops observability

A selected group of engineers may have the lead responsibilities around software quality, but they will need the full dev team to drive continuous improvements. David Ben Shabat, vice president of R&D at Quali, recommends, “Organizations should strive to create what I would call ‘visibility as a standard.’ This allows your team to embrace a culture of end-to-end responsibility and maintain a focus on continuous improvements to your product.” One way to address responsibility is by creating and following a standardized taxonomy and message format for logs and other observability data. Agile development teams should assign a teammate to review logs every sprint and add alerts for new error conditions. Ben Shabat adds, “Also, automate as many processes as possible while using logs and metrics as a gauge for successful performance.” Ashwin Rajeev, cofounder and CTO of Acceldata, agrees automation is key to driving observable applications and services. He says, “Modern devops observability solutions integrate with CI/CD tools, analyze all relevant data sources, use automation to provide actionable insights, and provide real-time recommendations.”
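
A standardized log taxonomy of the kind described can be enforced with a small helper every service shares. A minimal JSON-lines sketch — the field names here are an example taxonomy I've chosen, not an industry standard:

```python
import json
import datetime

def log_event(service, level, event, **fields):
    """Emit one JSON log line with a fixed base taxonomy (ts, service,
    level, event) plus free-form fields, so every team's logs can be
    parsed, correlated, and alerted on uniformly."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "service": service,
        "level": level,
        "event": event,
        **fields,
    }
    return json.dumps(record)
```

With a shared emitter like this, the sprint-by-sprint log review becomes a query over known fields rather than a grep through free text.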


Why leveraging privacy-enhancing tech advances consumer data privacy and protection

Historically, proprietary privacy-enhancing technologies have been developed by location technology companies and used internally. However, it’s my firm belief that for organizations of all types to truly progress toward the level of consumer data privacy people want and expect, privacy-enhancing technologies created by location technology companies should be made available to all companies that could benefit from these advancements. ... These tools help add industry-leading privacy controls to a company’s own systems and work with any kind of location data, no matter how it is generated. This helps ensure that a company is meeting privacy requirements and protecting consumer data. If more technology companies made the privacy-enhancing features used in their own systems available to other companies, organizations across industries could better protect the data stored in their systems, and in turn, consumer data privacy and protection is likely to progress and improve more quickly. A crucial starting point is democratizing access to these technologies.
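
One concrete, if simple, example of a privacy-enhancing treatment for location data is spatial generalization: truncating coordinate precision so a stored point identifies an area rather than an address. The sketch below is illustrative only; production PETs layer on stronger techniques such as aggregation, k-anonymity, and differential privacy:

```python
def generalize_location(lat, lon, decimals=2):
    """Round coordinates to the given number of decimal places.
    Two decimals is roughly 1.1 km of latitude precision, so the
    stored point no longer pinpoints an individual address."""
    return (round(lat, decimals), round(lon, decimals))
```

The design trade-off is explicit: fewer decimals means stronger privacy but coarser analytics, and the right setting depends on the use case.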


What’s To Come In 2023? Modern Frameworks, CISO Elevation & Leaner Security Stacks

The past year has shown the effects that whistleblowing (Twitter) can have when an organization ignores its employees flagging activity they consider fraudulent, unsafe, or illegal. But over the past year, we have also seen the consequences when CISOs actively ignore or hide security issues. For example, in the Uber situation, we saw for the first time criminal charges filed and then later a conviction. These contrasting stories create a potential no-win situation for CISOs who, on the one hand, may be ignored for calling out issues or could face jail time if they actively turn a blind eye (and/or hide) them. ... With the beginning of 2023 fraught with enormous economic and regulatory uncertainty, we will likely see a consolidation of tools and a greater focus on which tools are necessary. The nature of tech is that many organizations adopt tools to fix immediate problems, and often these tools have overlapping functionality and use cases. Although security budgets are likely to be a bit safer than other departments in a business, security teams will still need to consider what they must have to be successful with fewer resources. 


The Benefits of an API-First Approach to Building Microservices

APIs have been around for decades. But they are no longer simply “application programming interfaces”. At their heart, APIs are developer interfaces. Like any user interface, APIs need planning, design, and testing. API‑first is about acknowledging and prioritizing the importance of connectivity and simplicity across all the teams operating and using APIs. It prioritizes communication, reusability, and functionality for API consumers, who are almost always developers. There are many paths to API‑first, but a design‑led approach to software development is the end goal for most companies embarking on an API‑first journey. In practice, this approach means APIs are completely defined before implementation. Work begins with designing and documenting how the API will function. ... In the typical enterprise microservice and API landscape, there are more components in play than a Platform Ops team can keep track of day to day. Embracing and adopting a standard, machine‑readable API specification helps teams understand, monitor, and make decisions about the APIs currently operating in their environments.
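
One way to keep "defined before implemented" honest is a build-time check that compares served routes against the machine-readable spec. A minimal sketch — the spec paths and routes are made up, and a real check would read them from an OpenAPI document and the router:

```python
def undocumented_routes(spec_paths, implemented_routes):
    """Return implemented routes that do not appear in the API spec.
    In an API-first workflow this set should always be empty, so a
    non-empty result can fail the CI pipeline."""
    return sorted(set(implemented_routes) - set(spec_paths))
```

Run in CI, this turns the design-led discipline from a convention into an enforced invariant.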


MITRE ATT&CK Framework: Discerning a Threat Actor’s Mindset

Many security solutions offer a wide range of features to detect and track malicious behavior in containers. Defense evasion techniques are meant to obfuscate these tools so that everything the bad actor is doing seems legitimate. One example of defense evasion includes building the container image directly on the host instead of pulling from public or private registries. There are also evasion techniques that are harder to identify, such as those based on reverse forensics. Attackers use these techniques to delete all logs and events related to their malicious activities so that the administrator of a security, security information and event management (SIEM), or observability tool has no idea that an unauthorized event or process has occurred. To protect against defense evasion, you’ll need a container security solution that detects malware during runtime and provides threat detection and blocking capabilities. Two examples of this would be runtime threat defense to protect against malware and honeypots to capture malicious actors and activity.
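
The "image built directly on the host" evasion mentioned above can be roughly surfaced by checking whether running images carry an explicit registry prefix. This is only a heuristic sketch — the image names are examples, and bare Docker Hub names will also be flagged, so results need human review:

```python
def locally_built_images(images):
    """Flag image references that carry no explicit registry host.
    Docker treats the first path segment as a registry only when it
    contains a '.' or is 'localhost'; references without one (including
    bare Docker Hub names) are flagged for review as possibly host-built."""
    flagged = []
    for ref in images:
        first = ref.split("/", 1)[0]
        has_registry = "/" in ref and ("." in first or first == "localhost")
        if not has_registry:
            flagged.append(ref)
    return flagged
```

In an environment where policy requires pulling from a known registry, anything this flags is either an exception to document or an evasion to investigate.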


CIO role: 5 strategies for success in 2023

CIOs must adapt to the changing business landscape brought on by the pandemic. With many organizations embracing hybrid work, the internet plays a more prominent role in the overall network strategy. Ensure that your systems and processes are optimized for this new reality. This includes prioritizing the user experience of remote workers and implementing better end-user experience monitoring to ensure that they can be productive and collaborate effectively.  ... As organizations increasingly adopt multi-cloud systems to manage their IT infrastructure, CIOs must be able to navigate the complexity of these environments effectively. One approach is implementing a seamless strategy across all major clouds to streamline management and reduce complexity. Consider how you can optimize performance and apply security uniformly across your multi-cloud estate. Also, be mindful of the changing regulatory and compliance landscape and look for cloud services with built-in compliance features to minimize the burden on your teams.


How passkeys are changing authentication

The latest FIDO passkey specs are multi-device. Once a passkey is established for a given service, the same device can be used to securely share it with another device. The devices must be in close proximity, within wireless connection range, and the user takes an active role in verifying the device sync. The remote cloud service for the given device also plays a role: an iPhone uses Apple's cloud, an Android device uses Google Cloud Platform (GCP), and Windows uses Microsoft Azure. Efforts are underway to make sharing passkeys across providers simpler; for now it is a rather manual process to share across providers, for example, to go from an Android device to a macOS laptop. Passkeys are cryptographic keys, so gone is the possibility of weak passwords. They do not share vulnerable information, so many password attack vectors are eliminated. Passkeys are resistant to phishing and other social engineering attacks: the passkey infrastructure itself negotiates the verification process and isn’t fooled by a good fake website – no more accidentally typing a password into the wrong form.


CIOs sharpen tech strategies to support hybrid work

With competition for talent still tight and pressure on organizations to maximize employee productivity, Anthony Abbatiello, workforce transformation practice leader at professional services firm PwC, says CIOs should focus on how they can improve the hybrid experience for users. He advises CIOs to partner with their counterparts in HR to identify the worker archetypes that exist in their organizations to understand how they work and what they need to succeed. “CIOs should be asking how to create the right experience that each worker needs and what do they need to be productive in their job,” Abbatiello says. “Even if you’ve done that before, the requirements of people in a hybrid environment have changed.” Hybrid workers today are looking for digital workplace experiences that are seamless as they move between home and office, Abbatiello says. This includes technologies that enable them to replicate in cyberspace the personal connections and spontaneous collegiality that happen more easily in person, and they seek experiences that are consistent regardless of where they’re working on any given day.


Platform Engineering 101: What You Need to Know About This Hot New Trend

Before platform teams can start building their product, they need to define a clear mission statement to guide the process. This mission statement should fit the overall goals of the organization and proactively define the role of the platform team within the organization. It should also inspire your engineers. HashiCorp’s Director of Platform Engineering Infrastructure Michael Galloway summarizes this well: “It should be emotional and inspiring. … It should be simple but meaningful.” You can start by defining your goals. This could encompass things like enabling the required degree of developer self-service without adding cognitive load or achieving the desired reduction of tickets that go to ops without forcing developers to learn infrastructure-centric technologies end-to-end. After this, you’ll probably wind up with something like: “Our mission is to standardize workflows to improve the developer experience, speed up innovation cycles, and shorten time to market for the engineering organization.” It’s descriptive but not inspiring. Refine your mission statement to strike a good balance. For example: “Our mission is to build a platform developers love because it lets them innovate faster.”



Quote for the day:

"Leadership does not always wear the harness of compromise." -- Woodrow Wilson

Daily Tech Digest - January 23, 2023

CIOs step up to fill plus-size leadership roles

While it’s common for others in the C-suite to take on so-called plus-roles, the CIO has historically worn multiple hats without the benefit or glory of a formal title marking their expanded terrain. The successful shift to remote work and accelerated digital transformation during the pandemic years elevated CIOs’ stature, earning trust and recognition from business leaders that was absent in the past. With technology now a pervasive enabler of all aspects of the business, CIOs have the end-to-end visibility and cross-functional insights that other executives lack, not to mention oversight of key enterprise assets such as data and analytics. “We always had business leaders take on other roles, but the CIO hasn’t been considered for one of those pluses — they were always rolled up underneath another executive that was,” says John Hill, senior vice president and chief digital information officer at MSC Industrial Supply, where he heads up digital strategy, ecommerce, marketing, vending solutions, metalworking innovation, and advanced analytics. 


ChatGPT Prompts for Agile Practitioners

“One important thing to keep in mind is that conflict is a natural part of any team and should be viewed as an opportunity for growth and improvement. As a Scrum Master, it's important to create an environment where team members feel comfortable discussing and resolving conflicts in a constructive manner. It's also important to be aware that conflicts can often be the result of deeper underlying issues, such as lack of trust, poor communication, or unmet needs. As a Scrum Master, it's important to be aware of these underlying issues and work with the team to address them. Additionally, it's important to be aware of the team's dynamics and to be mindful of the impact that conflicts can have on individuals and the team as a whole. As a Scrum Master, it's important to be aware of any team members who may be more affected by the conflict and to provide them with additional support as needed. Finally, it is important to be aware of the impact of external factors on the team, such as changes in the organizational structure or market conditions, and to take them into account when addressing conflicts and underlying issues.”


3 predictions for open source in confidential computing

Confidential computing is the practice of isolating sensitive data and the techniques used to process it. This is as important on your laptop, where your data must be isolated from other applications, as it is on the cloud, where your data must be isolated from thousands of other containers and user accounts. As you can imagine, open source is a significant component for ensuring that what you believe is confidential is actually confidential. This is because security teams can audit the code of an open source project. ... In the past year, a lot of discovery and educational activity took place. Confidential computing is now better known, but it has yet to become a mainstream technology. The security and developer communities are gaining a better understanding of confidential computing and its benefits. If this discovery trend continues this year, expect more coverage from outlets like conferences, magazines, and publications, a sign that these entities recognize the value of confidential computing. In time, they may start to offer more airtime for talks and articles on the subject.


Exploring Cloud-Native Acceleration of Data Governance

Any seasoned data governance practitioner knows that implementing data management is only for those with extraordinary perseverance. Identifying and documenting data applications and flows, measuring and remediating data quality, and managing metadata — despite a flood of tools that attempt to automate many data management activities, these typically remain manual and time-consuming. They are also expensive. At leading organizations, data governance and remediation programs can top $10 million per year; in some cases, even $100 million. Data governance can be seen as a cost to the organization and a blocker for business leaders with a transformation mandate. So, here appears our “so-what.” If data governance could be incorporated directly into the fibers of the data infrastructure while the architectural blocks are being built and connected, it would dramatically reduce the need for costly data governance programs later. We will review three essential design features that, once incorporated, ensure that data governance is embedded “by design” rather than by brute manual force.


Crossplane: A Package-Based Approach to Platform Building

While Crossplane allows users to install many packages alongside one another, a common pattern has emerged of defining a single “root” package, which is an ancestor node of every other package that gets installed in a control plane. Doing so makes the entire API surface area reproducible by installing that one package. As an organization grows and evolves over time, that package can be expanded by defining new APIs or establishing new dependencies. ... Furthermore, because using a root package is a convention rather than a technical constraint, the property of composability is not violated. Taking the previously mentioned MySQL Configuration package for example, a user may install it as the root package of their MySQL database control plane, while another user may depend on it as only one component of a much larger API surface for use cases like internal cloud platforms. ... While the attributes of Crossplane packages enable platform builders to quickly add functionality to their control planes, they also present a unique distribution mechanism for vendors. 
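As a hedged illustration of the root-package pattern, a Crossplane Configuration's package metadata can declare other packages as dependencies; the package name, registry path, and version constraint below are hypothetical:

```yaml
# crossplane.yaml -- a hypothetical "root" Configuration package that pulls in
# a MySQL Configuration as a dependency (names and versions are illustrative).
apiVersion: meta.pkg.crossplane.io/v1
kind: Configuration
metadata:
  name: platform-root
spec:
  dependsOn:
    - configuration: registry.example.org/platform/mysql-config
      version: ">=v0.1.0"
```

Installing `platform-root` then reproduces the whole API surface, because the package manager resolves and installs every transitive dependency.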


The loneliness of leading a cybersecurity startup

When building something unprecedented and game-changing, the course and rules are steeped in darkness and uncertainty, with naysayers, critics, board members, competitors and time itself hurling criticisms every step of the way. Why on earth would anyone subject themselves to that? Perhaps it should be made clear that, for most of the aspiring CEOs I meet, their career path is hardly a choice. Entrepreneurship often runs in their blood, serving as a defining force for the people who pursue it. This all-consuming nature has become a necessary qualifier for most investors; it’s an important tool for survival, as well as for success. This is not to discount the absolute necessity of vision, inspiration and faith in a good idea. But on the long road to entrepreneurial success, these are often subject to so much scrutiny and judgement that a different strength is necessary to stay the course. These days, tightening budgets and dreaded potential layoffs only add to the pressure they feel. However, during the toughest times, an entrepreneur’s hunger to build can provide the critical momentum they need to move forward.


IT hiring: How to find the right match

A strong company mission is not only crucial to attracting talent, but it will also motivate existing employees to do their work well. I am motivated to push through the complex problems I face in my work, for example, because I know that my company is solving some of the most significant issues in the healthcare industry. My work ultimately brings a better experience to hospital staff and patients, which leads to improved patient outcomes. In every organization, it is crucial for executives and board members to focus on agreed-upon goals and to share with employees how the company is meeting them regularly. Furthermore, employees seek a collaborative environment where they work together toward a common goal. ... Intelligent, scrappy workers are often attracted to startups, which allow ambitious employees to wear many hats and build new skills quickly. I am fascinated by the technical and intellectual aspects of my job and find them highly rewarding. My teammates and I don’t know all the answers as we work with new models, so we must test our hypotheses and rethink our approach.


Pentagon must act now on quantum computing or be eclipsed by rivals

Many experts, including Spirk, believe that military applications for quantum computing could be less than 10 years away. Case in point: according to the Pentagon’s annual report on Chinese military power, China recently designed and fabricated a quantum computer capable of outperforming a classical high-performance computer for a specific problem. This is also why DARPA announced the ‘Underexplored Systems for Utility-Scale Quantum Computing’ (US2QC) program to explore potentially overlooked methods by which quantum computers could achieve practical levels of utilization much faster than current predictions suggest. The White House recently signed the Quantum Computing Cybersecurity Preparedness Act into law, signaling that it regards quantum as a serious issue. The act addresses the migration of executive agencies’ IT systems to post-quantum cryptography (PQC) - encryption which is secure from attacks by quantum computers because of the advanced mathematics underpinning it.


How Will the AI Bill of Rights Affect AI Development?

The AIBoR states that AI systems should be transparent and explainable, and not discriminate against individuals based on various protected characteristics. “This would require AI developers to design and build transparent and fair systems and carefully consider their systems’ potential impacts on individuals and society,” explains Bassel Haidar, artificial intelligence and machine learning practice lead at Guidehouse, a business consulting firm. Creating transparent and fair AI systems could involve using techniques, such as feature attribution, that can help identify the factors that influenced a particular AI-driven decision or prediction, Haidar says. “It could also involve using techniques such as local interpretable model-agnostic explanations (LIME), which can help to explain the decision-making process of a black-box AI model.” AI developers will have to thoroughly test and validate AI systems to ensure that they function properly and make accurate predictions, Haidar says. “Additionally, they will need to employ bias detection and mitigation techniques to help identify and reduce potential biases in the AI system.”
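To make "feature attribution" concrete, here is a minimal occlusion-style sketch in Python. It is an assumption-level illustration, not how LIME works internally: the model, feature names, and baseline are invented, and each feature's attribution is simply the score change when that feature is reset to its baseline value.

```python
# Occlusion-style feature attribution for a stand-in "black box" model.
# All names and weights are made up for illustration.

def model(features: dict) -> float:
    # Pretend black box: a fixed linear credit-scoring model.
    weights = {"income": 0.5, "debt": -0.8, "age": 0.1}
    return sum(weights[name] * value for name, value in features.items())

def attribute(features: dict, baseline: dict) -> dict:
    """Per-feature score change when the feature is reset to its baseline."""
    full_score = model(features)
    attributions = {}
    for name in features:
        occluded = dict(features, **{name: baseline[name]})
        attributions[name] = full_score - model(occluded)
    return attributions
```

For this linear toy model, each attribution is exactly the weight times the deviation from baseline; for a genuine black-box model, libraries such as LIME or SHAP estimate comparable per-feature contributions.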


The metaverse brings a new breed of threats to challenge privacy and security gatekeepers

While security experts point to authentication and access controls to protect against metaverse-based scams and attacks, the growing number of platforms providing access to the metaverse may or may not have secure mechanisms for recognizing fraud, says Paul Carlisle Kletchka, governance, risk, and compliance (GRC) analyst with Lynx Technology Partners, a provider of GRC services. “One of the major vulnerabilities is the lack of standardized security protocols or mechanisms in place across the platforms,” he says. “As a result, cybercriminals can use the metaverse for a variety of purposes such as identity theft, fraud, or malicious attacks on other users. Since people can download programs and files from within the metaverse, there is also a risk that these files could contain malware that could infect a user's computer or device and spread back into the organization’s systems. Another threat is piracy: since the metaverse is still in its early stages of development, there are no laws or regulations written specifically for the metaverse to protect intellectual property within this digital environment.”



Quote for the day:

"A leader is judged not by the length of his reign but by the decisions he makes." -- Klingon Proverb

Daily Tech Digest - January 22, 2023

Top Humanoid Robots Innovations So Far

Sophia is a social humanoid robot. She was activated in February 2016 and made her first public appearance at the South by Southwest Festival in mid-March 2016 in Austin, TX. Since her launch, Sophia has garnered a lot of media coverage, featuring in numerous high-profile interviews, events, and panel discussions across the world. ... Toyota T-HR3 is a third-generation humanoid robot, designed from the get-go to be remote-controlled by a human. It is 1.5 meters tall, weighs 75 kilograms, and has 32 degrees of torque-controlled freedom plus a pair of hands with 10 fingers. The robot is designed to be a platform with capabilities that can safely assist people in a variety of settings, such as homes, medical facilities, disaster-stricken areas, construction sites, and outer space. ... E2-DR is a disaster response robot from Honda that is able to navigate through dangerous, complex environments. The robot looks humanoid, and is heavier and tougher than the company’s Asimo, first presented in 2000. Honda E2-DR is designed to perform as a rescuer in a broad range of situations too dangerous for human rescuers.


OpenAI, ChatGPT and the intensifying competition for data management within the supercloud

What many industry analysts are seeing, much to the chagrin of large data/search players like Google, is that OpenAI has leaped to the forefront of providing the capabilities to handle the data requirements of the supercloud. A lot of this is due to the concentrated capabilities within ChatGPT born from tedious underlying work, such as the training of machine learning models, according to Xu. As a result, companies need to be proactive enough to see AI technologies as critical to a supercloud future instead of just being in the count while leaving AIOps on the back burner. “For most of the Fortune 500 companies, your job is to survive the big revolution,” Xu said. “So you at least need to do your walmart.com sooner than later and not be like GE with a lot of the hand-waving.” Microsoft, for its part, has shown some of that foresight, as it’s recently invested around $10B into OpenAI and worked with the company across several areas, including its OpenAPI services. 


Overcoming Challenges in Privacy Engineering

The bigger the company, the greater the likelihood that there’ll be considerable amounts of legacy code lurking in the depths of the organization’s systems. Very few developers properly understand legacy code, so it’s usually highly opaque. Some employees might know the connections for some of the lines of code, and some sections might have been replaced more recently, but in general there’s very poor visibility into which services are related to which database, which services are sharing data with which other services, and other aspects of legacy code. On top of all this, data mapping projects are caught in a tech version of Zeno’s paradox. Most of the projects that are being mapped are live projects, which means that more data, more tables, and more connections are being added on a continual basis. But most data mapping is currently carried out manually. The map is out of date as soon as it’s completed, because of the speed at which live projects expand. There’s no way that human employees can keep up with the pace at which new data and relationships are added to the project. 


Cisco Report Shows Cybersecurity Resilience as Top of Mind

The report delves into the factors that could provide the biggest gains in enterprise security resilience, whether based on culture, IT environment, or security technology. Cisco took these factors and devised a security resilience scoring system based on seven areas. Those most closely adhering to these core principles are in the top 10% of resilient businesses. Those missing most of these elements are in the bottom 10%. Culture is especially vital. Those with poor security support from the C-suite score 39% lower than those with strong executive support. Similarly, those with a thriving security culture score 46% higher than those lacking it. But it isn’t all about culture. Staffing, too, played a definite role, whether based on experienced staff, certification and training, or the sheer number of internal resources. The report shows those companies maintaining extra internal staffing and resources to respond to incidents gain a 15% boost in resilient outcomes. In other words, headcount can mean the difference between faring well and poorly during an event. 


How distributed architecture can improve the resilience of your organization

Distributed architecture is not exactly a new thing to the average IT department, but organizations aren’t always aware of all the benefits that it provides – things like improved scalability, performance, cost savings and resiliency. ... Cost savings is a common driver for establishing a distributed architecture. By setting up multiple nodes, you can route traffic through the nearest node instead of relying on simpler call-routing rules, like connecting all participants to the node closest to the first person who joined the call. Bandwidth consumption on WAN networks can be very expensive, with transatlantic costs especially high. With Pexip, nodes can be placed within your internal network to reduce the cost of the traffic on WAN networks. An added cost-saving feature from Pexip comes from our media transcoding. Media streams coming back from Pexip are reduced in size as they travel between nodes. Since Pexip handles the compute, you’re left with a more efficient media traffic flow that costs less. Distributed architecture means that your entire deployment is more resilient. 
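"Route traffic through the nearest node" can be sketched in a few lines. This is illustrative only, not Pexip's actual logic; the node names, coordinates, and distance model are assumptions:

```python
# Pick the closest node to a caller by approximate geographic distance.
import math

NODES = {
    "eu-west":  (51.5, -0.1),    # London (hypothetical node)
    "us-east":  (40.7, -74.0),   # New York (hypothetical node)
    "ap-south": (1.35, 103.8),   # Singapore (hypothetical node)
}

def nearest_node(lat: float, lon: float) -> str:
    """Return the node with the smallest approximate distance to the caller."""
    def dist(node):
        nlat, nlon = NODES[node]
        # Equirectangular approximation: good enough to rank candidate nodes.
        x = math.radians(nlon - lon) * math.cos(math.radians((nlat + lat) / 2))
        y = math.radians(nlat - lat)
        return math.hypot(x, y)
    return min(NODES, key=dist)
```

A production system would rank nodes by measured network latency and current load rather than raw geography, but the shape of the decision is the same.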


Hardening The Last Line Of Defence For Financial Organisations

IT infrastructure and security operations teams live in two worlds that are often separated by design. Whilst the SecOps teams want to regulate all access as strictly as possible, the IT infrastructure teams need to be allowed to access all important systems for backup. Many of these teams are not collaborating as effectively as possible to address growing cyber threats, as a recent survey found. Among respondents who believe collaboration between IT and security is weak, nearly half say their organisation is more exposed to cyber threats as a result. For true cyber resilience these teams must work closely together, as the high number of successful attacks proves that attack vectors are changing and it’s not just about defence, but backup and recovery. ... If financial organisations want to achieve real cyber resilience and successfully recover critical data even during an attack, they will have to modernise their backup and disaster recovery infrastructure and migrate to modern approaches such as a next-gen data management platform.


Effective business continuity requires evolution and a plan

IT and cybersecurity teams can work with other business decision-makers to assess risk levels for each system. This involves comparing the organization's business model against the IT infrastructure to determine which systems are mission-critical to operations. During the risk analysis, key considerations -- such as whether the organization can survive without email for a week, what systems are regularly backed up and what systems are cloud-based vs. on premises -- should be weighed and addressed. Organizations may want to assign tiers to each system to define which ones must be restored the fastest. It's often the safest course to colocate critical systems or keep certain backup systems offline. Ensure the colocation isn't connected to the corporate network via Active Directory and that it's segmented from other systems, as compromises can occur if the colocation is the primary environment for data storage and has a connection to the corporate network. Colocation lets organizations bring the most essential systems back online and continue operations, even if core systems have been breached or otherwise disrupted.
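The tiering idea reduces to a small data-modeling exercise. A hedged sketch, with invented system names and recovery time objectives (RTOs):

```python
# Tiered restore ordering: lowest tier (most critical) first, ties broken by
# the tightest RTO. System names and values are made up for illustration.
SYSTEMS = [
    {"name": "payments-db",   "tier": 1, "rto_hours": 1},
    {"name": "email",         "tier": 2, "rto_hours": 24},
    {"name": "intranet-wiki", "tier": 3, "rto_hours": 72},
]

def restore_order(systems):
    """Names of systems in the order they should be brought back online."""
    ranked = sorted(systems, key=lambda s: (s["tier"], s["rto_hours"]))
    return [s["name"] for s in ranked]
```

Agreeing on this table with business decision-makers during the risk analysis is the hard part; once it exists, the restore sequence follows mechanically.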


Four Steps To Self-Service Data Governance Success

Data governance can help teams oversee and control access to confidential information. You could unlock automation for data security faster with a no-code/low-code approach. A no-code approach could make self-service data governance easier by handling all of the complicated things behind a simple interface. Your data teams won't have to write hundreds of lines of code to handle complex, repetitive procedures like applying granular access policies to many users simultaneously. To simplify your transition to no-code, start with a pilot. Look for no-code/low-code technology that lets you move quickly into implementation. Prioritize options that let you sign on to the service in minutes without requiring long-term contracts. Then, connect your cloud database and control access with classification-based policies that don't require your team to write code to allow only approved users to view the data. When the situation calls for more customization, like trying to see who has access to your cloud database, test the low-code capability. ... A no-code/low-code capability could make the job of managing data governance infinitely easier.
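Behind a no-code interface, a classification-based access policy often amounts to a simple mapping from data classification to allowed roles. A minimal sketch, with made-up labels and roles:

```python
# Classification-based access control: the kind of rule a no-code/low-code
# governance tool generates behind its interface. Labels and roles are
# illustrative assumptions.
POLICIES = {
    # data classification -> roles allowed to read it
    "public":       {"analyst", "engineer", "marketing"},
    "internal":     {"analyst", "engineer"},
    "confidential": {"engineer"},
}

def can_read(role: str, classification: str) -> bool:
    """Deny by default: unknown classifications grant no access."""
    return role in POLICIES.get(classification, set())
```

Applying the same policy to many users at once is then one table update rather than hundreds of lines of bespoke access code.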


Top 5 Considerations for Better Security in Your CI/CD Pipeline

Securing running microservices is just as crucial to an effective CI/CD security solution as is preventing application breaches by moving security to the pipeline’s earlier stages. The context necessary to comprehend Kubernetes structures — such as namespace, pods and labels — is not provided by conventional next-generation firewalls (NGFW). Implicit trust and flat networks are built to thwart external attacks, but once the perimeter has been compromised, they give attackers a great deal of surface. As a result, it’s important to leverage a platform that enables continuous security and centralized policy and visibility for efficient and effective continuous runtime security. The majority of application teams automate their build process using build tools like Jenkins. Security solutions must be included in popular build frameworks to bring security to a build pipeline. Such integration enables teams to pick up new skills quickly and pass or fail builds depending on the requirements of their organization. 
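Passing or failing builds "depending on the requirements of their organization" usually comes down to a severity gate over scan findings. A hedged sketch (the finding format and threshold are assumptions; a real pipeline would consume output from an actual scanner step in Jenkins or a similar build tool):

```python
# Build gate: fail the pipeline when any scan finding exceeds the
# organization's severity threshold. The finding shape is an assumption.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings, max_allowed="medium"):
    """Return True (pass the build) if no finding exceeds max_allowed."""
    limit = SEVERITY_RANK[max_allowed]
    return all(SEVERITY_RANK[f["severity"]] <= limit for f in findings)
```

Wired into the build framework, a False return exits nonzero and fails the build, which is exactly the pass/fail integration the passage describes.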


4 High-Impact Data Quality Issues That Are Easily Avoidable

In the modern data stack, data quality issues can range from semantic and subjective – which are hard to define – to operational and objective, which are easy to define. For instance, objective and easier-to-define issues would be data showing up with empty fields, duplicate transactions being recorded, or even missing transactions. More concrete, operational issues could be data uploads not happening on time for critical reporting, or a data schema change that drops an important field. Whether a data quality issue is highly subjective or unambiguously objective depends on the layer of the data stack it originates from. A modern data stack and the teams supporting it are commonly structured into two broad layers: 1) the data platform or infrastructure layer; and, 2) the analytical and reporting layer. The platform team, made up of data engineers, maintains the data infrastructure and acts as the producer of data. This team serves the consumers at the analytical layer ranging from analytics engineers, data analysts, and business stakeholders.
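The objective issues named here, empty fields and duplicate transactions, are exactly the kind that are easy to check mechanically. A minimal sketch, with an invented record shape:

```python
# Two objective data-quality checks from the passage: missing/blank required
# fields and duplicate transaction ids. The row format is illustrative.
def empty_field_rows(rows, required):
    """Indices of rows where any required field is missing or blank."""
    return [i for i, row in enumerate(rows)
            if any(row.get(f) in (None, "") for f in required)]

def duplicate_ids(rows, key="txn_id"):
    """Transaction ids that appear more than once."""
    seen, dupes = set(), set()
    for row in rows:
        txn = row.get(key)
        if txn in seen:
            dupes.add(txn)
        seen.add(txn)
    return dupes
```

Checks like these typically run at the platform layer as data lands, so the analytical layer never has to debate whether a blank amount is a semantic question.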



Quote for the day:

"Don't be buffaloed by experts and elites. Experts often possess more data than judgement." -- Colin Powell