Daily Tech Digest - November 22, 2022

Multimodal Biometrics: The New Frontier Against Fraud

This new category includes instances when online criminals directly target consumers, often through a text, call, or email, rather than by obtaining a person’s personal information at the institutional level. This change in tactics in recent years has significant consequences for both individuals and the companies they do business with. The consumer, Javelin says, has become “the path of least resistance.” Consumers aren’t the only ones affected by this change in approach. It has significantly altered the advice we give our banking and financial services customers, as well. ... Identity verification platforms with multi-modal biometrics and liveness detection offer next-generation levels of security. Even better, platforms now entering the market combine multi-modal biometrics and liveness detection with a frictionless, easy-to-use interface. With some, customers simply look into their phones or laptop cameras and say a phrase to easily and securely access an online account. This is the conversation my colleagues and I are having with our banking and financial institution customers.


The 5 Most Dangerous Cognitive Biases For Startup Founders

Confirmation bias is the tendency to search for information proving your already-established worldview, rather than disproving it. It is obviously crucial to try to avoid this when constructing your idea or product validation tests or when talking to customers. Don’t try to defend your assumptions and decisions - instead, try to gather unbiased feedback so that you have a higher confidence level in the results of your tests. Fake confirmation of your ideas might make your life easier, as it would give you a scapegoat for your failure. Yet, in the long run, it’s much better to overcome your ego and succeed than to defend it but ultimately fail. Anchoring bias is the tendency to rely heavily on the first piece of information you have on a topic. It is often used in negotiations as a trick to bring the expectations of the opposing party closer to your desired outcome. In startups, it is very important not to unwittingly play this trick on yourself. For example, if you’ve been offering a service for free, you might feel reluctant to raise the price significantly even if it is the right thing to do for your business.


The rise of metaverse shopping

Even as the metaverse continues to gain popularity, it’s important for retailers to remember that it is still relatively new, she observed. “The reality is there are so many other channels for retailers to engage customers, such as web, mobile, in-store and social, and they need to also focus on strengthening those experiences,” Estes said. Brands should not be trying to match virtual experiences with traditional in-store experiences, Mason noted, as they are very different mediums and have different strengths for connecting with customers. “The key thing to remember is that metaverse experiences are new and opt-in,” he said. “They need to be fun and engaging for the user to find something worthwhile in them. After all, moving to a competing brand’s metaverse experience is just a click or a hand-wave away. It is important for companies to consider how their brand will translate to a new medium.” Brands should consider how their brand representatives will greet consumers. Will they be serious, fun or edgy? What kind of language and voice will be used, and how will their brand avatar present itself visually?


How intelligent automation will change the way we work

As organizations automate their business processes, there are many potential hazards to avoid. “The main one is ignoring your people and underestimating that,” Butterfield said. “Although the outcome is driven by using a technology, everything up to the actual automation of a process is generally very people-focused. A lack of change management will unfortunately cause many issues in the long term. Organizations need to keep their people aligned with their overall goals.” Security, mainly authentication, is also a key concern, Barbin said. “Any automation, API [application programming interface] or other, requires some means to pass access credentials,” he said. “If the systems that automate and contain those credentials are compromised, access to the connected systems could be too.” To help minimize that risk, Barbin suggests using SAML 2.0 and other technologies that take stored passwords out of the systems. Another pitfall is selecting only one technology as the automation tool of choice. Typically organizations need multiple technologies to get the best results, said Maureen Fleming, program vice president for intelligent process automation research at IDC.


How can IT leaders address ‘quiet quitting’?

While this is less likely to be an issue if staff are driven by the organisation’s vision and purpose, as is often the case with tech startups, it is still “important to look at what the expectations are on both sides, what’s reasonable and where compromises could be made”, she says. Klotz also suggests that part of the reason why some IT leaders, among others, have reacted so negatively to the idea of quiet quitting is over concerns that “paying extra for everything” could hit profit margins, which in turn could put the company out of business, particularly in economically difficult times. But he also points to the dynamic nature of the tech industry, which requires discretionary working at times simply to deliver on projects. “It’s only if you ask people to go above and beyond without compensation that it gets exploitative rather than being part of a healthy functioning relationship,” Klotz says. “But many companies ask employees to do extra almost as part of the job description, which is partly why they provide amazing benefits and such good compensation – people know what they’re getting into and are rewarded for it.”


Applying Enterprise Risk Management to Cyberrisk

Both the reality of cyberthreats and regulatory changes should make it clear to boards, owners and management that there is a need for better management of cybersecurity. Enterprise risk management (ERM) is a tool that management and the board can use to help manage risk across the enterprise, including cyberrisk. The Committee of Sponsoring Organizations of the Treadway Commission (COSO) ERM framework and International Organization for Standardization (ISO) 31000 are two prominent frameworks for ERM. Both frameworks emphasize that for effective ERM, an organization needs to have oversight from senior management, an organizational structure to support ERM and qualified staff. These and other capabilities that are needed to support ERM are also necessary to support cybersecurity and manage cyberrisk; therefore, the contents of both frameworks are easily and aptly applied to cybersecurity. Organizations can learn about the consequences of ineffective enterprise management of cybersecurity from many examples around the world, including the 2021 ransomware attack on Ireland’s Health Services Executive (HSE).


Why to Rethink and Update Approaches to Payment Security Management

“CISOs are increasingly challenged in their efforts to secure payment security compliance, and in convincing board members and other stakeholders of the importance and significance of securing strategic support and resources,” Hanson explains. The 2022 Payment Security Report points out that CISOs often use outdated methods to secure support, and that a change in approach is needed for all stakeholders. “Rather than taking a check-the-box approach to compliance, CISOs and other security leaders need to take an out-of-the-box, thinker’s approach that involves implementing frameworks and models,” Hanson says. “This is especially true for those taking the Customized approach to compliance.” MacLeod says there are several key stakeholders in organizations who ensure payment security compliance, from the CEO and CIO across to the CISO and CFO -- and these roles are changing as the payments industry evolves. ... As a result, stakeholders such as the CIO and CISO are playing an increasingly important role in ensuring payment security compliance.


Five defence-in-depth layers to implement for business security success

Businesses have many wonderful applications at their fingertips, with the average user having access to 5-10+ high-value business apps. These contain sensitive resources such as customer information, intellectual property, and financial data, making them a key target for attackers. Unfortunately, 80 per cent of businesses have faced users misusing or abusing these apps in the last year. Simply requiring a login is not enough to keep them safe – the moment a user steps away from their screen while still logged in, all of that valuable data is exposed. The defensive layer: a login only verifies a user’s identity at one point – so effective security controls here will continue to monitor, record, and audit user actions after authentication. Enhancing the visibility available to security teams offers many benefits, including being able to identify the source of a security incident (and therefore respond) much quicker. ... Almost all businesses benefit from using third party tools, but they offer risks too, as integration often requires creating super-user access to clients’ systems. 
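As a rough sketch of what "monitoring after authentication" can look like in practice, here is a minimal, framework-free Python audit decorator (the function and action names are hypothetical, invented for illustration) that records every sensitive action a logged-in user takes, not just the login itself:

```python
import functools
from datetime import datetime, timezone

# In a real deployment this would ship to a centralized, protected store.
AUDIT_TRAIL = []

def audited(action_name):
    """Record who did what, and when: after authentication, every
    sensitive action is still monitored, not only the initial login."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(user, *args, **kwargs):
            AUDIT_TRAIL.append({
                "user": user,
                "action": action_name,
                "at": datetime.now(timezone.utc).isoformat(),
            })
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

# Hypothetical high-value operation wrapped with auditing
@audited("export_customer_data")
def export_customer_data(user, region):
    return f"export for {region} requested by {user}"

export_customer_data("alice", region="eu-west")
assert AUDIT_TRAIL[0]["action"] == "export_customer_data"
```

The same trail then feeds the incident-response benefit described above: when something goes wrong, security teams can trace which authenticated user performed which action, and when.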


The future of IT: decentralization and collaboration

As the role of IT evolves and collaboration increases, IT leaders are increasingly working as partners – rather than technology gatekeepers – with department heads. This collaboration and decentralization of IT across the enterprise gives employees self-sufficiency and autonomy when making technology decisions for their departments. They no longer depend on the IT team for their process automation, tool choices, or technology operations. ... IT personnel must clarify to all employees which applications are allowed on the corporate network. Employees should always inform IT personnel about their use of non-sanctioned applications and devices. If employees are downloading non-sanctioned apps and using non-sanctioned devices to access the corporate network, the IT department may have trouble preventing malware from accessing the network. When employees are open and honest about the devices and applications they use, it is much easier for IT personnel to mitigate rogue downloads and keep the network safe. Also, with social engineering efforts on the rise, IT must teach all other employees about popular attack methods, such as phishing and business email compromise.


Craftleadership: Craft Your Leadership as Developers Craft Code

There are other common practices in software development that apply to management. First, organize your budgeting process as a CI/CD pipeline. Make budget definition something that is easily repeatable, and that fits in your organization. CI/CD allows you to get rid of fastidious tasks by putting them in a pipeline. Budgeting is one of the most fastidious things I have found I have to do as a manager. Second, master your tools. If MS Excel is the tool used by the managers in your organization, be an Excel master. Third, try to be reactive in your decisions, as in reactive programming. Be asynchronous when making decisions; as much as possible, try to reduce the “commit” phase, that is, the meetings where everyone must be present to say they agree. In my case, I think that it is necessary to maintain these meetings where everybody agrees on different things. Yet, in these meetings, I never address an issue that I haven’t had the time to discuss thoroughly with everyone beforehand; this could be through a simple asynchronous email loop where everyone had a chance to give his or her opinion.



Quote for the day:

"Successful leadership requires positive self-regard fused with optimism about a desired outcome." -- Warren Bennis

Daily Tech Digest - November 21, 2022

Achieve Defense-in-Depth in Multi-Cloud Environments

Many organizations are adopting log-based solutions (from endpoint to perimeter security), which is a good first step, but logs can be bypassed or disabled. Even worse, hackers can manipulate logs to give the appearance that “everything is fine,” when in fact, they are moving between users, resources and exfiltration. The solution to this problem is to normalize visibility across the locations where your organization’s data lives – from the cloud to on-prem, and data centers. Knowing that IT and Security teams rely on logs makes them attractive targets for hackers today. However, taking a defense-in-depth approach versus logs alone is now critical to ensuring that every single entry point to your organization is secure. Network intelligence plays a huge role in gaining visibility – it is the only way to ensure visibility into all of the data in motion across your entire infrastructure and prevent risks. ... Just like cloud infrastructure management is a shared responsibility within the organization, so must enterprise security including data security be a shared responsibility. 
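The article notes that attackers can quietly manipulate logs to make everything look fine. One widely used complementary control (not described in the piece, but a common defense-in-depth layer) is to make logs tamper-evident by hash-chaining entries, so any retroactive edit breaks the chain and is detectable. A minimal Python sketch:

```python
import hashlib
import json

class HashChainedLog:
    """Append-only log where each entry's hash commits to the previous
    entry, so any retroactive edit invalidates everything after it."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._prev_hash = self.GENESIS

    def append(self, event: dict) -> None:
        payload = json.dumps(event, sort_keys=True)
        entry_hash = hashlib.sha256(
            (self._prev_hash + payload).encode()
        ).hexdigest()
        self.entries.append({"event": event, "hash": entry_hash})
        self._prev_hash = entry_hash

    def verify(self) -> bool:
        """Recompute the chain from the start; False means tampering."""
        prev = self.GENESIS
        for entry in self.entries:
            payload = json.dumps(entry["event"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True

log = HashChainedLog()
log.append({"user": "alice", "action": "login"})
log.append({"user": "alice", "action": "read", "resource": "db1"})
assert log.verify()

# An attacker rewriting an earlier event breaks the chain
log.entries[0]["event"]["action"] = "logout"
assert not log.verify()
```

This does not replace the network-level visibility the article argues for; it simply makes the "logs look fine" deception harder to pull off silently.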


A Serverless-First Mindset in an Evolving Landscape

A serverless-first mindset is no doubt beneficial in a number of ways, but some businesses may have reservations in terms of the potential for vendor lock-in, the security offered by the cloud provider, existing sunk costs and other issues in debugging and development environments. However, even among the most serverless-averse, this mindset can provide benefits to a select part of an organisation. In a bank, for example, maintaining the traditional network infrastructure remains crucial for uptime of the underlying database, but a serverless-first mindset gives employees the freedom to develop consumer-facing apps and other solutions with agility as consumer demand grows. Agile and serverless strategies typically go hand-in-hand, and both can encourage quick development, modification and adaptation.


IT talent: The 3 C's for life/work balance

Compensation and benefits are not just lifestyle issues. Although these have virtually nothing to do with how much we enjoy our time at work or how far and fast we advance our careers, they carry a lot of psychological value in our culture because they feed ego and self-esteem. Few people who love their job, have great career prospects, work for a wonderful boss, and have a short commute will move simply for the money. Conversely, many are looking to leave high-paying jobs because their boss is a jerk, the commute is too long, or their skills are outdated. Many candidates initially cite compensation as their top criterion to make a move. Still, I have yet to meet a candidate who would accept a position sight unseen without knowing specific details of the job’s other C's. Big money or great benefits have never made a bad job good. Compensation comes to mind first because it is tangible, measurable, and has psychological power, but underlying its number-one ranking is the assumption that all the other criteria are met. Like everything else, compensation and benefits for a specific role are determined by an ever-changing marketplace.


Extortion Economics: Ransomware's New Business Model

This industrialization of cybercrime has created specialized roles in the RaaS economy. When companies experience a breach, multiple cybercriminals are often involved at different stages of the intrusion. These threat actors can gain access by purchasing RaaS kits off the Dark Web, consisting of customer service support, bundled offers, user reviews, forums, and other features. Ransomware attacks are customized based on target network configurations, even if the ransomware payload is the same. They can take the form of data exfiltration and other impacts. Because of the interconnected nature of the cybercriminal economy, seemingly unrelated intrusions can build upon each other. For example, infostealer malware steals passwords and cookies. These attacks are often viewed as less serious, but cybercriminals can sell these passwords to enable other, more devastating attacks. However, these attacks follow a common template. First comes initial access via malware infection or exploitation of a vulnerability. Then credential theft is used to elevate privileges and move laterally.


7 Microservice Design Patterns To Use

Saga pattern - This microservice design pattern provides transaction management using a sequence of local transactions. A saga guarantees that either all of its operations complete, or that the corresponding compensating transactions are run to undo the work already done. Furthermore, in Saga, a compensating transaction should be retriable and idempotent. These two principles ensure that transactions can be managed without manual intervention. The pattern is also a way of managing data consistency across microservices in distributed transaction scenarios. ... Event Sourcing - Event sourcing defines an approach to handling data operations driven by a sequence of events, each of which is recorded in an append-only store. The app code sends a series of events that describe every action that happened on the data to the event store. Typically, the event store publishes these events so consumers can be notified and handle them if required. For instance, consumers could initiate tasks that apply the event operations to other systems, or carry out any other action needed to complete an operation.
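The saga mechanics described above can be sketched in a few lines: run each local transaction in order, and on failure run the compensations of the completed steps in reverse. This is a toy illustration with invented step names, not tied to any particular saga framework:

```python
class SagaStep:
    def __init__(self, name, action, compensation):
        self.name = name
        self.action = action            # the local transaction
        self.compensation = compensation  # undoes it; must be idempotent

def run_saga(steps, state):
    """Run each local transaction; on failure, compensate the
    already-completed steps in reverse order to undo their work."""
    completed = []
    for step in steps:
        try:
            step.action(state)
            completed.append(step)
        except Exception:
            for done in reversed(completed):
                done.compensation(state)  # retriable and idempotent
            return False
    return True

# Hypothetical order-processing saga: reserve stock, then charge payment
state = {"stock": 5, "charged": 0}

def reserve(s): s["stock"] -= 1
def unreserve(s): s["stock"] += 1
def charge(s):
    raise RuntimeError("payment declined")  # simulate a failed step
def refund(s): s["charged"] = 0

ok = run_saga(
    [SagaStep("reserve", reserve, unreserve),
     SagaStep("charge", charge, refund)],
    state,
)
assert not ok and state["stock"] == 5  # the reservation was undone
```

Because the charge step fails, the saga rolls the stock reservation back, leaving the system consistent without any distributed transaction coordinator.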


Enterprises embrace SD-WAN but miss benefits of integrated approach to security

When asked to list the challenges they faced when taking a do-it-yourself (DIY) approach to SD-WAN, respondents cited difficulties related to hiring and retaining a skilled in-house workforce, keeping up with technology developments and the ability to negotiate favourable terms with technology vendors. “Now that SD-WAN has matured and has been widely adopted, the complexity of deployments has grown, challenging enterprises on multiple fronts and compromising their ability to realise the full benefits of the technology,” said James Eibisch, research director, European infrastructure and telecoms, at IDC, commenting on the study. “Enterprises are increasingly reliant on the resources and expertise of a managed service provider to ensure they deploy SD-WAN in a way best suited to meet their organisations’ objectives. Security approaches like secure access service edge (SASE) that combine the benefits of SD-WAN with zero-trust network access and content filtering features are well poised to dominate the next phase of SD-WAN enhancements as enterprises continue to enable the cloud IT model and a hybrid workforce,” he added.


Quantum computing: Should it be on IT’s strategic roadmap?

Quantum computing is a nascent field. Few companies are planning to purchase quantum computers, but there are companies that are starting to use them for competitive advantage. For this reason alone, quantum computing should have a place on IT strategic roadmaps. Financial services institutions like banks and brokerage houses are beginning to experiment with quantum computing as a way to process large volumes of financial transactions more quickly. Quantum computing can also be used for financial risk analysis, and financial services companies are using quantum computing for fraud detection. Quantum computing can be used to determine worldwide supply chain risks such as weather, strikes and political unrest, with an eye toward eliminating supply chain bottlenecks before they happen. Pharmaceutical companies are experimenting with quantum computing as a way to assess the viability of new drug combinations and their beneficial and adverse effects on humans. The goal is to reduce R&D costs and speed new products to market. They are also looking to customize drugs to each individual patient’s situation.


Big Tech Layoffs: A Flood of Talent vs the Hiring Crisis

There has been a sea change in the prospects certain big tech players anticipated would continue to buoy their sector. Sachin Gupta, CEO of HackerEarth, says many big tech and social media platforms saw explosive growth when the pandemic changed spending patterns and drove moves to work remotely and conduct more activities online. “What the businesses started thinking was this was going to last forever, which is very natural,” he says. It is very difficult to be in the midst of such a wave, he says, and then predict that it would not continue. The reasons behind the recent layoffs and firings differ, of course. Meta’s troubles include not seeing expected traction -- such as its exploration of the metaverse. Meanwhile, Twitter is in the throes of a regime change that has been acrimonious for at least some of the rank and file of the company, which has seen sweeping layoffs, resignations, and outright firings of personnel new CEO Elon Musk no longer wanted to darken the company’s door -- office doors that Musk abruptly ordered to be shut (temporarily) and locked last week even to remaining employees.


Creating an SRE Practice: Why and How

The most important first step is to adopt the SRE philosophies mentioned in the previous section. The one that will likely have the fastest payoff is to strive to eliminate toil. CI/CD can do this very well, so it is a good starting point. If you don't have a robust monitoring or observability system, that should also be a priority so that firefighting for your team is easier. ... You can't boil the ocean. Everyone will not magically become SREs overnight. What you can do is provide resources to your team (some are listed at the end of this article) and set clear expectations and a clear roadmap to how you will go from your current state to your desired state. A good way to start this process is to consider migrating your legacy monitoring to observability. For most organizations, this involves instrumenting their applications to emit metrics, traces, and logs to a centralized system that can use AI to identify root causes and pinpoint issues faster. The recommended approach to instrument applications is using OpenTelemetry, a CNCF-supported open-source project that ensures you retain ownership of your data and that your team learns transferable skills.
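The core idea behind trace instrumentation is small: wrap units of work in named, timed, nestable spans and ship them to a central backend. This is a conceptual stand-in only; the names below are invented for illustration and are not OpenTelemetry's actual API, which a real migration would use instead:

```python
import time
import uuid
from contextlib import contextmanager

# Stand-in for an exporter: a real SDK would send spans to a backend.
SPANS = []

@contextmanager
def span(name, parent_id=None):
    """Record a named unit of work with timing and parent context."""
    s = {"id": uuid.uuid4().hex, "parent": parent_id,
         "name": name, "start": time.monotonic()}
    try:
        yield s
    finally:
        s["duration"] = time.monotonic() - s["start"]
        SPANS.append(s)  # inner spans finish (and export) first

with span("handle_request") as root:
    with span("query_db", parent_id=root["id"]):
        time.sleep(0.01)  # simulate a slow database call

# The exported spans reconstruct the call tree and its timings
assert [s["name"] for s in SPANS] == ["query_db", "handle_request"]
```

Once spans like these flow into a centralized system, the "pinpoint issues faster" payoff follows: the slow `query_db` span stands out inside its parent request.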


The Challenge of Cognitive Load in Platform Engineering

You must never forget that you are building products designed to delight their customers - your product development teams. Anything that prevents developers from smoothly using your platform, whether a flaw in API usability or a gap in documentation, is a threat to the successful realisation of the business value of the platform. With this lens of cognitive load theory, delight becomes a means of qualifying the cognitive burden the platform is removing from the development teams and their work to accomplish their tasks. The main focus of the platform team, as described by Kennedy, is “on providing ‘developer delight’ whilst avoiding technical bloat and not falling into the trap of building a platform that doesn’t meet developer needs and is not adopted.” She continues by noting the importance of paved paths, also known as Golden Paths: by offering Golden Paths to developers, platform teams can encourage them to use the services and tools that are preferred by the business.



Quote for the day:

"Leadership is familiar, but not well understood." -- Gerald Weinberg

Daily Tech Digest - November 20, 2022

AI experts question tech industry’s ethical commitments

For Wachter, the “cutting costs and saving time” mindset that permeates AI’s development and deployment has led practitioners to focus almost exclusively on correlation, rather than causation, when building their models. “That spirit of making something quick and fast, but not necessarily improving it, also translates into ‘correlation is good enough – it gets the job done’,” she says, adding that the logic of austerity that underpins the technology’s real-world use means that the curiosity to discover the story between the data points is almost entirely absent. “We don’t actually care about the causality between things,” says Wachter. “There is an intellectual decline, if you will, because the tech people don’t really care about the social story between the data points, and social scientists are being left out of that loop.” She adds: “Really understanding how AI works is actually important to make it fairer and more equitable, but it also costs more in resources. There is very little incentive to figure out what is going on [in the models].” Taking the point further, McQuillan describes AI technology as a “correlation machine” that, in essence, produces conspiracy theories.
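Wachter's "correlation is good enough" point is easy to demonstrate concretely: two variables that never influence each other can still be strongly correlated when a hidden confounder drives both, which is exactly the story between the data points that a correlation-only model never sees. A small self-contained Python illustration:

```python
import random

# Toy illustration: a and b never affect each other, yet both are
# driven by a hidden confounder, so they end up highly correlated.
random.seed(0)

confounder = [random.gauss(0, 1) for _ in range(10_000)]
a = [c + random.gauss(0, 0.1) for c in confounder]  # a <- confounder
b = [c + random.gauss(0, 0.1) for c in confounder]  # b <- confounder

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    vx = sum((xi - mx) ** 2 for xi in x)
    vy = sum((yi - my) ** 2 for yi in y)
    return cov / (vx * vy) ** 0.5

r = pearson(a, b)
assert r > 0.9  # strong correlation, zero direct causation
```

A model trained only on `a` and `b` would "get the job done" predictively while telling us nothing true about how either variable actually comes about.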


The Next Generation of Supply Chain Attacks Is Here to Stay

As the vast majority of the workforce has gone digital, organizations' core systems have been moving to the cloud. This accelerated cloud adoption has exponentially increased the use of third-party applications and the connections between systems and services, unleashing an entirely new cybersecurity challenge. Three main factors are driving the rise in app-to-app connectivity: product-led growth (PLG), in an era of bottom-up software adoption led by software-as-a-service (SaaS) leaders like Okta and Slack; DevOps, where dev teams freely generate and embed API keys in their code; and hyperautomation, where the rise of low-code/no-code platforms means "citizen developers" can integrate and automate processes with the flip of a switch. The vast scope of integrations is now easily accessible to any kind of team, which means time saved and increased productivity. But while this makes an organization's job easier, it blurs visibility into potentially vulnerable app connections, making it extremely difficult for organizational IT and security leaders to have insight into all of the integrations deployed in their environment, which expands the organization's digital supply chain.
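One practical mitigation for the freely embedded API keys mentioned above is automated secret scanning of source before it ships. The sketch below is illustrative only: the two patterns are deliberately simplified stand-ins, and real scanners ship far more comprehensive rule sets:

```python
import re

# Simplified example patterns (real tools use many more):
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key id
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]{16,}['\"]"),
]

def find_embedded_secrets(source: str):
    """Return (line_number, match) pairs for likely hardcoded credentials."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern in SECRET_PATTERNS:
            for m in pattern.finditer(line):
                hits.append((lineno, m.group(0)))
    return hits

# Hypothetical source file with a hardcoded key on line 3
code = '''
db_host = "example.internal"
API_KEY = "abcd1234efgh5678ijkl"
'''
hits = find_embedded_secrets(code)
assert hits and hits[0][0] == 3
```

Run as a pre-commit hook or CI step, even a crude scanner like this catches the most common leak path before a key reaches a repository or a third-party integration.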


Cultivating social emotional learning in the metaverse

Interactions and learning trigger feelings and emotions. There is a need to develop emotional awareness, to pause and notice the emotional signals of the body. The practice of pause – the conscious allotting of space and time to look inwards and notice physical sensations like a ‘racing pulse’, a ‘shaking leg’ or a ‘clammy hand’ is a must for well-being. When things seem to be falling apart, it is useful to breathe. Evidence suggests that, by counting our breaths and centring our breathing, we calm our minds. Whether dealing with difficult conversations with colleagues, family, friends, teachers or students, the ability to regulate emotion and attention is a well-being practice proven to mitigate accompanying anxiety, fear, anger or despair. ... Feeling a pit in one’s stomach or a thumping heart are physical symptoms that often accompany intense emotional responses. At such times, a friend; app; conscious trained practice like counting numbers, breaths or tiles on the floor; time-out or break; or walking can all be good ways to physically distract focus and allow some of the intensity of the emotion to diminish.


How Much Automation Is Too Much?

For many forms of automation, deskilling isn’t a serious problem. Knowledge workers in general, including ops personnel, may face many routine, repeatable tasks in their day-to-day work that don’t require a level of skill that would cause an issue if that skill were lost. All such routine tasks are subject to automation without concern. At the other extreme, organizations may aspire to "lights out" production environments, so fully automated that there’s no reason to keep the lights on, because there are no people on duty. Any organization with such a lights out environment is likely to lose any staff who might be able to fix something if it goes wrong, either via deskilling or attrition. As AI-based automation becomes increasingly sophisticated, therefore, organizations will reach some optimal point where the advantages of automation sufficiently balance any disadvantages. Finding this optimum depends upon the people involved — the skilled workers who must somehow accommodate automation in their day-to-day work. Be sure to listen to the senior-level people who are adept at analogizing. They can solve problems that automation will never be able to solve.
 

Bringing a Product Mindset into DevOps

A product mindset is about delivering things that provide value to our users, within the context of the organisation and their strategy, and doing so sustainably (i.e. balancing the now and the future). For the purpose of this article, I will use product thinking, product mindset and product management very much interchangeably. ... In practice this means achieving product-market fit by balancing what our users need, want and find valuable (desirability), what we need to achieve (and can afford) as an organisation (viability) and what is possible technically, culturally, legally, etc (feasibility), and doing this without falling into the trap of premature optimisation or closing options too early. To give a tiny, very specific, but quite telling example: for the medical device organisation we chose Bash as scripting language because the DevOps lead was comfortable with it. Eventually we realised that the client’s engineers had no Bash experience, but as a .Net shop were far more comfortable with Python. Adding a user-centric approach, which is part of a product mindset, at an early stage would have prevented this mistake and the resulting rework.


Solving brain dynamics gives rise to flexible machine-learning models

Last year, MIT researchers announced that they had built “liquid” neural networks, inspired by the brains of small species: a class of flexible, robust machine learning models that learn on the job and can adapt to changing conditions, for real-world safety-critical tasks, like driving and flying. The flexibility of these “liquid” neural nets meant boosting the bloodline to our connected world, yielding better decision-making for many tasks involving time-series data, such as brain and heart monitoring, weather forecasting, and stock pricing. But these models become computationally expensive as their number of neurons and synapses increase and require clunky computer programs to solve their underlying, complicated math. And all of this math, similar to many physical phenomena, becomes harder to solve with size, meaning computing lots of small steps to arrive at a solution. Now, the same team of scientists has discovered a way to alleviate this bottleneck by solving the differential equation behind the interaction of two neurons through synapses to unlock a new type of fast and efficient artificial intelligence algorithms. 
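To see why step-by-step numerical solving is the bottleneck a closed-form solution removes, consider a toy pair of coupled leaky neurons integrated with explicit Euler steps. This is purely an illustration (not the researchers' model or their solution): even one second of simulated time at a small step size costs thousands of updates, and the cost grows with every neuron and synapse added.

```python
# Toy model: two mutually coupled leaky units, integrated by taking
# many small explicit Euler steps. A closed-form solution would give
# the state at time t directly, without the per-step loop.

def simulate(t_end=1.0, dt=1e-4, tau=0.1, w12=0.8, w21=-0.5):
    x1, x2 = 1.0, 0.0        # initial states of the two units
    steps = int(t_end / dt)
    for _ in range(steps):
        dx1 = (-x1 + w21 * x2) / tau  # leak plus input from unit 2
        dx2 = (-x2 + w12 * x1) / tau  # leak plus input from unit 1
        x1 += dt * dx1
        x2 += dt * dx2
    return x1, x2, steps

x1, x2, steps = simulate()
print(steps, round(x1, 6), round(x2, 6))  # 10,000 updates for 1 second
```

Halving `dt` doubles the work for the same simulated time, which is why eliminating the stepwise solver, as the MIT team did for the synapse interaction, translates directly into faster models.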


4 Examples of Microservices Architectures Done Right

Microservices are everywhere in today’s increasingly virtual, decentralized world. 85% of organizations with 5,000 or more employees use microservices in their organization in some capacity as of 2021. Even more tellingly, 0% report having no intention of adopting microservices in their company. Clearly, microservices are here to stay, meaning more and more businesses will be adopting microservices in the coming months and years. This is good news, as microservices are capable of so much. This popularity comes with its own risks, though. Some businesses that integrate microservices into their existing workflow will need help figuring out what to do with them. ... Uber would not be able to exist if not for microservices. Though their monolithic phase was brief, it resulted in insurmountable hurdles to their growth. Without microservices, the ride-sharing app couldn’t fix bugs when they occurred, develop or launch new services, or transition to a global marketplace. Their monolithic structure was prohibitively complex, requiring developers to have extensive experience working with the system to make even simple changes.


Chief engineering officer: A day in the life

It’s easy to get caught up in tactical work, so I make sure to reserve time for high-level planning and strategy. This means following tech news and blogs to stay abreast of the latest technology trends, keeping an eye on market news, reading the latest analyst research, meeting up with my peers in our private equity portfolio companies, and more. It’s important for me not just to be on top of the technology, but to understand where we’re taking the business in the future. This kind of thinking is what helped lead to Gartner naming Boomi a Leader in the Gartner Magic Quadrant for Enterprise Integration Platform as a Service (EiPaaS) for the eighth consecutive year in 2021. ... If you’re thinking of getting into software engineering, here’s my advice: just do it. It’s a high-demand career, and there is a continual shortage of strong talent in the industry. You’ll find almost limitless opportunities once you get started. And there has never been a better time to do so, with many technological advancements that have lowered barriers to entry. As you move into management, though, it’s important to remember that your job is no longer to code – it’s to satisfy customer requirements and meet business goals.


5 unexpected ways to improve your architecture team

A key consideration for large organizations is ensuring that no major component can function with complete autonomy or in a silo. You don't want a squad to be incapable of getting things done without any input from other squads. Quite the contrary, for minor decisions or implementations, each squad is empowered to move as quickly as possible. For major decisions with little or no recourse to make changes later, collaboration is key to the "measure twice, cut once" approach to critical decision-making. This enforced "checks and balances" means that no one chapter can unilaterally stray too far outside our strategic bounds. At a lower level, the Delivery Management squad team members work within other squads but all report to the same manager. Embedding them within other teams helps make every squad's activities as consistent as possible. This allows minimal impact on sprints when, for example, a delivery manager is out on leave, because process alignment is a goal within the Delivery Management function. 


Why companies can no longer hide keys under the doormat

CIOs need to ask their teams questions to assess this potential exposure and understand the risk, and to put plans in place to address it. Fortunately, recent breakthroughs have been able to close this encryption gap and maintain full protection for private keys. Leading CPU vendors have added security hardware within their advanced microprocessors that prevents any unauthorized access to code or data during execution, or afterwards in what remains in memory caches. The chips are now in most servers, particularly those used by public cloud vendors, enabling a technology generally known as confidential computing. This “secure enclave” technology closes the encryption gap and protects private keys, but it has required changes to code and IT processes that can involve a significant amount of technical work. It is specific to a particular cloud provider (meaning it must be altered for use in other clouds) and complicates future changes in code or operational processes. Fortunately, new “go-between” technology eliminates the need for such modifications and potentially offers multi-cloud portability with unlimited scale.



Quote for the day:

"The leadership team is the most important asset of the company and can be its worst liability" -- Med Jones

Daily Tech Digest - November 18, 2022

How connected automation will release the potential of IoT

Connected automation is an industry-first, no-code, highly secure, software-as-a-service layer that IoT devices can easily connect to. It intelligently orchestrates multi-vendor software robots, API mini-robots, AI, and staff, all operating together in real time as an augmented digital workforce. It’s a hyper-productive digital workforce delivering high-speed, data-rich, end-to-end processes that enable IoT devices to instantly inter-communicate and securely work with physical and digital systems of all ages, sizes, and complexities – at scale. So, for the first time, investments in IoT can deliver their true potential, but without huge investments in changing existing systems. ... So, when human judgement is required, handoffs arrive via robot-created, sophisticated, intuitive digital user interfaces – all in real time. Where augmented insights are instantly required within IoT-initiated processes, AI or other smart tools are used to escalate with predictive analysis and problem-solving capabilities, in real time. And once decisions are made, by people or AI, they can immediately be actioned, without major changes to existing systems or processes.


Internet Outages Could Spread as Temperatures Rise. Here's What Big Tech Is Doing

We need data centers to be close to populations, but that means their climatological impact is local, too. "If we don't address climate change, we really will be toast," former Google CEO and chairman Eric Schmidt told CNBC in April. He left the tech giant in 2017 to launch his own philanthropic firm to support research in future-looking fields -- and found climate change harder to ignore. "We really are putting the jeopardy of our grandchildren, great-grandchildren and great-great-grandchildren at risk." Experts say that data centers can be built to be kinder to the climate. But it's going to be tough to pull off. When selecting a site for their data centers, companies like Microsoft and Amazon prioritize access to low-cost energy, which they've historically found in places like Silicon Valley, northern Virginia and Dallas/Fort Worth, though Atlanta and Phoenix have been growing. They also look for internet infrastructure from telecoms AT&T, Verizon and CenturyLink, along with fiber providers like Charter and Comcast, to keep data flowing. 


Google AI — Reincarnating Reinforcement Learning

To overcome the inefficiencies of tabula rasa RL, Google AI introduces Reincarnating RL — an alternative approach to RL research in which prior computational work, such as learned models, policies, and logged data, is reused or transferred between design iterations of an RL agent or from one agent to another. Some sub-areas of reinforcement learning already leverage prior computation, but most RL agents are still trained largely from scratch. Until now, there has been no broader effort to leverage prior computational work in the RL research training workflow. The code and trained agents have been released to enable researchers to build on this work. Reincarnating RL is a more efficient way to train RL agents than training from scratch, which can allow more complex RL problems to be tackled without requiring excessive computational resources. Furthermore, RRL can enable a benchmarking paradigm where researchers continually improve and update existing trained agents. Real-world RL use cases will likely be in domains where prior computational work is available.
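A minimal illustration of the idea — not Google's actual method — is warm-starting a tabular Q-learning agent from a previously trained agent's Q-table instead of from zeros. The chain environment, seed values, and hyperparameters below are all invented for the sketch:

```python
import random

random.seed(0)

def run_q_learning(q, episodes, alpha=0.5, gamma=0.9, eps=0.5):
    """Tabular Q-learning on a 5-state chain; reaching state 4 gives reward 1."""
    total_steps = 0
    for _ in range(episodes):
        s = 0
        while s != 4:
            if random.random() < eps:
                a = random.choice((0, 1))                     # explore
            else:
                a = max((0, 1), key=lambda x: q[(s, x)])      # exploit
            s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)   # 1 = right, 0 = left
            r = 1.0 if s2 == 4 else 0.0
            best_next = max(q[(s2, 0)], q[(s2, 1)])
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
            total_steps += 1
    return total_steps

# Tabula rasa agent: an all-zero table.
scratch = {(s, a): 0.0 for s in range(5) for a in (0, 1)}
# "Reincarnated" agent: table seeded from a previously trained agent's Q-values.
reincarnated = {(s, a): (0.9 ** (4 - s) if a == 1 else 0.0)
                for s in range(5) for a in (0, 1)}

steps_scratch = run_q_learning(scratch, episodes=20)
steps_reincarnated = run_q_learning(reincarnated, episodes=20)
print(steps_scratch, steps_reincarnated)
# The warm-started agent keeps preferring "move right" in every non-terminal state.
print(all(reincarnated[(s, 1)] > reincarnated[(s, 0)] for s in range(4)))
```

The warm-started table starts out already pointing toward the goal, so fine-tuning only refines it — the same reuse-rather-than-restart economics RRL argues for at scale.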


Best practices for bolstering machine learning security

Given the proliferation of businesses using ML and the nuanced approaches for managing risk across these systems, how can organizations ensure their ML operations remain safe and secure? When developing and implementing ML applications, Hanif and Rollins say, companies should first use general cybersecurity best practices, such as keeping software and hardware up to date, ensuring their model pipeline is not internet-exposed, and using multi-factor authentication (MFA) across applications. After that, they suggest paying special attention to the models, the data, and the interactions between them. “Machine learning is often more complicated than other systems,” Hanif says. “Think about the complete system, end-to-end, rather than the isolated components. If the model depends on something, and that something has additional dependencies, you should keep an eye on those additional dependencies, too.” Hanif recommends evaluating three key things: your input data, your model’s interactions and output, and potential vulnerabilities or gaps in your data or models.
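One concrete way to "keep an eye" on a model dependency — offered here as a generic sketch, not a practice the interviewees specifically describe — is to pin the digest of a trained model artifact and refuse to load any bytes that don't match, so tampering anywhere in the pipeline fails loudly:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of an artifact's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# At release time, record the digest of the serialized model (placeholder bytes here).
model_bytes = b"...serialized model weights..."
PINNED_DIGEST = sha256_of(model_bytes)

def load_model(artifact: bytes, expected_digest: str) -> bytes:
    """Refuse to load a model whose bytes do not match the pinned digest."""
    if sha256_of(artifact) != expected_digest:
        raise ValueError("model artifact digest mismatch - refusing to load")
    return artifact

assert load_model(model_bytes, PINNED_DIGEST) == model_bytes
try:
    load_model(model_bytes + b"!", PINNED_DIGEST)  # a tampered artifact
except ValueError:
    print("tampered artifact rejected")
```

The same pattern extends to the model's own dependencies: pin digests for datasets and upstream checkpoints, not just the final artifact.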


How To Be Crypto-Agile Before Quantum Computing Upends The World

To be crypto-agile means to be able to make cryptographic changes quickly and without the burden of massive projects. That means adopting tools and technologies that abstract away underlying cryptographic primitives and that can change readily. To be crypto-agile is to acknowledge that change is on the horizon and that anything built today needs to be able to adapt to coming changes. Smart organizations are already updating existing systems and forcing crypto-agility requirements for all new projects. This is an opportunity for security teams to re-examine not just what algorithms they are using but also their data protection strategies in general. Most data today is “protected” using transparent disk or database encryption. This is low-level encryption that makes sure the bytes are scrambled before they hit the disk but is invisible while the machine is on. Servers stay on around the clock. A better approach is to use application-layer encryption (ALE). ALE is an architectural approach where data is encrypted before going to a data store. When someone peeks at the data in the data store, they see random bytes that have no meaning without the correct key.
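A minimal sketch of ALE, assuming the third-party Python `cryptography` package: the application encrypts each record before it reaches the store, so the store only ever holds opaque bytes. Fernet also embeds a version byte in its tokens, a small example of the kind of abstraction over primitives that crypto-agility calls for:

```python
from cryptography.fernet import Fernet

# The application holds the key; the data store never sees plaintext.
key = Fernet.generate_key()
cipher = Fernet(key)

def store(db, record_id, plaintext):
    """Encrypt before the data ever reaches the store (application-layer encryption)."""
    db[record_id] = cipher.encrypt(plaintext.encode())

def load(db, record_id):
    """Decrypt on the way back out; only key holders get meaningful bytes."""
    return cipher.decrypt(db[record_id]).decode()

db = {}  # stand-in for any data store: database, object store, cache...
store(db, "user:1", "account=12345678")
print(b"12345678" in db["user:1"])  # the at-rest bytes are opaque
print(load(db, "user:1"))
```

Someone peeking at `db` sees only ciphertext, which is exactly the property transparent disk encryption cannot provide while the machine is running.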


What Happens if Microservices Vanish -- for Better or for Worse

The modern cloud has really accelerated the move towards those architectures. There’s benefits and drawbacks to those architectures. There’s a lot more moving pieces, a lot more complexity, and yet microservices offers a way to tame some of the complexity by putting services behind API boundaries. Amazon was very famous in the early days because Jeff Bezos required the way teams communicate is through APIs. That created this notion that each team was running a different service and the service was connected through software; APIs, not human beings. That helps different teams move independently and codify the contract between the teams, and yet there is no question that it can be massively overdone and can be used as a tool to sweep complexity under the rug and pretend it doesn’t exist. Once it’s behind an API, it’s easy to just set it and forget it. The reality is, I see companies with thousands of microservices when they probably should have had five. It can definitely be overdone, but a spectrum is the way I think of it.


IT leaders meet the challenge to innovate frugally

When CIOs undertake this exercise, Sethi says, “they should ensure that their biases and preferences are kept at bay. For instance, if an IT leader wants to upgrade a system but the analysis shows it is not critical from a business, technology, or risk perspective, it should be deferred.” This approach helps CIOs prioritize spend. “At the end of the exercise, technology leaders may finally come up with 50% of the budget for vital initiatives, 30% for essential projects, and the remaining 20% for desirable initiatives.” With budgets locked, at whatever levels, CIOs will get the clarity to take up and sustain innovative implementations accordingly. ... According to Singh, “one of the most challenging aspects of innovating with budget constraints is to find a vendor who is willing to customize and develop at a low cost. The second was to find team members who were ready to toil hard to run and test the scenarios in real time.” “We offered an attractive proposition to the partner company ­— it was free to sell the developed solution to other customers. The partner found it compelling enough to work for us virtually free of cost...” he says.


How Cisco keeps its APIs secure throughout the software development process

To wrangle the complexity of the API landscape and make it more secure, Cisco adopted a “shift-left” strategy, incorporating security earlier into the software development process. “Shift-left security is really about prioritizing security and bringing it to the top of mind in the day-to-day work of a developer so they can harden their code and [decrease] the threats from cyberattacks,” Francisco says. An API-for-an-API, a solution for which Cisco won a 2022 CSO50 award, weaves security into the end-to-end cycle for enterprise API services. The tool helps from code development to deployment, tracks APIs’ security posture live while the application is in production, and integrates with API gateways. The solution tests API interfaces against Cisco’s security policies. The end-to-end solution is meant for both developers and DevSecOps professionals. “From a cultural perspective, we have a lot of work left to do to break down the silos between these groups, because they speak a different language and they’re looking at different data points,” Francisco says.


Zero trust – what is it and why is strong authentication critical?

Zero trust was developed as a response to the new realities of our digital world. Enterprises must grapple with the challenge of authenticating employees in today’s hybrid/remote economy. Gartner predicts that an estimated 51 per cent of knowledge workers were remote by the end of 2021 and a Microsoft study found that 67 per cent of employees bring their own device. Zero trust accommodates these modern network realities, including remote users, BYOD, and cloud-based assets which are not located within an enterprise-owned network boundary. A perimeter-focused security approach does little to combat insider threats, which are one of the most serious sources of breaches today. ... Since a zero-trust model assumes a network is always at risk of being exposed to threats and requires all users and all devices be authenticated and authorised, authentication plays a huge role in a zero-trust ecosystem. Zero Trust Architecture is centred around identity and data, as the goal of implementation is to protect access to data by specific, authorised identities dynamically. 


In a data-led world, intuition still matters

Defining the problem first and then working backward toward the data can put you in some good company. The authors cite the example of Amazon. There, when people have an idea for a new product or service, they have to write up a press release and FAQs to help them and everyone else understand what it is, how it will work, and how various contingencies would be handled. That process helps all parties gain insight into what they really need to know to determine if the scheme is a good idea. This sort of thing will help you focus on the right data. But you also need to make sure the data is right. Here, again, the authors have good advice: in both defining the problem and confronting the data, they emphasize the importance of asking powerful, probing questions. In particular, they recommend developing what they call “IWIK”—I Wish I Knew—questions designed to elicit data actually relevant to making a decision. All data, however obtained or elicited, must be rigorously interrogated. Is it accurate? Do means and medians mask explosive outliers?



Quote for the day:

"The minute a person whose word means a great deal to others dare to take the open-hearted and courageous way, many others follow." -- Marian Anderson

Daily Tech Digest - November 17, 2022

Why Cybersecurity Should Highlight Veteran-Hiring Programs

Veterans who did not work in cybersecurity while in the military still have valuable skills that they bring to the field, however. The military emphasizes teamwork, adaptability, and responsibility, all traits that security professionals need to have. Military personnel are also trained in careful decision-making under extreme pressure using the available information. ... Earlier this year, the White House National Cyber Workforce and Education Summit issued a call to action to increase cybersecurity education and training opportunities. One of the announcements was to encourage more apprenticeship programs to help develop and train the cybersecurity workforce. In the months since, there have been a number of initiatives from cybersecurity organizations, including the SANS Institute. Many colleges and universities have specific training programs to give veterans hands-on experience in various areas. Cybersecurity training platform Cybrary said this week it is partnering with VetSec, a community of over 3,300 veterans working in or transitioning into cybersecurity, and TechVets, a bridge service for moving veterans, service leavers, reservists, and their families into IT careers.


ESG and C: Does Cybersecurity Deserve Its Own Pillar in ESG Frameworks?

Thefts of personal information during a cybersecurity breach erode trust on the part of customers, investors, employees and other stakeholders, demonstrating the link between cyber risk and social risk. The new disclosure and reporting requirements embedded in the Securities and Exchange Commission’s latest regulations governing the oversight of cybersecurity underline the link between governance risk and cyber risk. All this evidence shows that cybersecurity is already part of ESG and that, perhaps, a more appropriate abbreviation would be ESGC. Most enterprise risk management policies have already expanded their oversight from purely financial risk to these other areas, including cybersecurity. Cyber risk can be as harmful to a company’s reputation and value as any other ESG issue, and the damage is inflicted and experienced in much the same way. As cyberattacks increase in size and frequency, the direct and indirect damage to companies — including loss of customer confidence, reputational damage, potential impact on the stock price and possible regulatory actions or litigation — arguably touches all aspects of ESG.


Efficient data governance with AI segmentation

An effective and efficient technology is available to replace such archaic methods and reduce risk fast, at a fraction of the cost: artificial intelligence (AI) segmentation. With AI-based segmentation, we ascertain what attributes of a file point to it being more likely to contain sensitive data after scanning just a small statistical sample of files. This provides us with important information to prioritize our search for high-risk data. For example, are Word documents at a higher risk than PowerPoint presentations? Is there a particular folder that is more likely to contain sensitive data? Once we have our riskiest data highlighted, we can immediately start a full scan and remediation process, eliminating the highest risk as early in the process as possible. Thus, we have prioritized the remediation process to achieve the greatest risk reduction in the least amount of time. For example, suppose we have many terabytes of data broken up into chunks of 100 terabytes. To index or scan 100 terabytes at a time could require several months of work, and it takes even longer to go through all of it.
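The sampling idea can be sketched in a few lines. The regex "detector", the synthetic corpus, and the use of file extension as the risk attribute are all toy stand-ins for the vendor's actual AI segmentation model:

```python
import random
import re

random.seed(1)

SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # toy detector: SSN-like strings

def estimate_risk_by_attribute(files, sample_size=50):
    """Scan only a small random sample and estimate, per file extension,
    how likely a file with that attribute is to contain sensitive data."""
    sample = random.sample(files, min(sample_size, len(files)))
    hits, totals = {}, {}
    for name, text in sample:
        ext = name.rsplit(".", 1)[-1]
        totals[ext] = totals.get(ext, 0) + 1
        hits[ext] = hits.get(ext, 0) + bool(SENSITIVE.search(text))
    return {ext: hits[ext] / totals[ext] for ext in totals}

# Synthetic corpus: "docx" files often hold SSN-like data, "pptx" files rarely do.
files = [(f"report{i}.docx", "ssn 123-45-6789" if i % 2 else "notes") for i in range(200)]
files += [(f"deck{i}.pptx", "quarterly slides") for i in range(200)]

risk = estimate_risk_by_attribute(files)
scan_order = sorted(risk, key=risk.get, reverse=True)
print(scan_order)  # full-scan the riskiest attribute group first
```

Scanning 50 files instead of 400 already tells us where the full scan should start — the same prioritization logic, at terabyte scale, is what makes the approach fast.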


What Makes a Good Cybersecurity Professional?

A cybersecurity professional is, at their core, an analytical person who looks at a problem from multiple points of view and devises an approach to solve the problem. When doing so, they must collaborate with people from different backgrounds and functions to understand the problem in depth and in context. This requires good communication skills, unlike some other technology-heavy roles in which a specification (spec) is provided and the task is strictly to achieve the spec. Someone could be an analyst, a risk advisor, a banker or a human resources (HR) professional and they could still be considered a cybersecurity professional. Cybersecurity relies on an understanding of human behavior and on contextual transactions in different lines of business. For example, a banker knows the pitfalls of processes when it comes to banking-related operations. They can bring this wealth of knowledge to cybersecurity operations by gaining the right skill set to build their technological capability. Imagine this to be a role where a consultant envisions the strategy and the solution and builds the technological stack required to solve the problem.


Agility, business more important than cost, network management for IT teams

Half of IT teams rated end-to-end visibility as a top priority, while just under that number said multicloud software-defined networks (SDNs) were on their most-wanted list. The bottom line in this area, said Cisco, was that although an unpredictable world challenges IT organisations, it also presents new opportunities for those that use technology to support dynamic business needs. It said IT needs to adopt an agile, cloud-like operations model for everything it does, including network operations. Cisco also remarked that as endpoints and applications become more dispersed and distributed, network complexity multiplies. While adoption of public cloud is growing, 50% of workloads are still deployed on-premise, and as a result, most environments will continue to be a mix of public cloud, hosted, private cloud, edge and on-premise environments, it said. CloudOps and NetOps figured highly in both operational and organisational trends and there was greater alignment between the objectives of both. In all, 49% of CloudOps and 42% of NetOps respondents said security was their top motivation for using multiple clouds, and both said business performance, security and agility were top priorities.


Networking for remote work puts the emphasis on people, not sites

Many companies had to support work-from-home (WFH) during COVID, and most looked forward to having their staff back in the office. Most now tell me that some or all of the staff isn’t coming back, and that remote work is a given for at least some positions, likely for a very long time. That’s opened major questions about how these now-forever-roaming workers are connected to information resources and to each other. Didn’t we solve this already, with Zoom and Teams? Sort of. Collaborative video applications provide a reasonable substitute for meetings, but you still have the challenge of application access and information delivery. A bit over 80% of enterprises I’ve talked with say they need to make a remote worker look like they’re at their desk, and they need to be able to work as though they were as well. During lockdown, most companies said they relied on sending files and documents to workers. A few used SD-WAN technology to connect workers’ homes to the company VPN. The former strategy is very limiting and inefficient; you can’t replace checking account status online by sending around documents.


5 ways to find hidden IT talent inside your organization

Nobody knows the hidden IT talents of non-IT employees better than their managers and co-workers. At TruStone, business leaders and managers are open to recognizing employees with IT potential that could benefit both the employee’s career and the company. “We’re transparent that this would be a great person for [an IT] career progression, so maybe they should come into IT,” Jeter says. Jeter often discovers talent through his team’s product management consults inside the organization. “With a lot of scaled agile framework, we have product owners that sit outside of IT but within the business in areas like consumer lending, member services, or mortgages. We have technologies to align with them and they orchestrate the backlog” and other supporting duties, Jeter says. “They see what IT does, and we see what they do — and some of them want to come into IT.” IT scored a new team member recently after a product owner in operations worked with IT on a product management consult. He had been with the company for nine years and worked in training before business operations.


Not patched Log4j yet? Assume attackers are in your network, say CISA and FBI

The ubiquitous nature of Apache Log4j means it's embedded in a vast array of applications, services and enterprise software tools that are written in Java and used by organizations around the world, many of which rushed to apply the fixes. But despite the urgent messaging around the need to apply critical security updates, there are still organizations that haven't done so – meaning they're still vulnerable to any cyber criminals or other malicious hackers looking to exploit Log4j. Now CISA and the FBI have warned organizations with affected VMware systems that didn't immediately apply patches or workarounds "to assume compromise and initiate threat hunting activities". The cybersecurity advisory (CSA) also warns any organizations that detect a compromise as a result of Log4j to "assume lateral movement" by the attackers, investigate any connected systems and audit accounts with high privilege access. "All organizations, regardless of identified evidence of compromise, should apply the recommendations in the mitigations section of this CSA to protect against similar malicious cyber activity," said the alert.
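As a first, filename-level pass — far cruder than real scanners, which also unpack nested jars and inspect manifests — one could walk a filesystem for log4j-core jars in the version range affected by CVE-2021-44228 (roughly 2.0 through 2.14.1):

```python
import os
import re
import tempfile

JAR_RE = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")

def find_vulnerable_jars(root):
    """Walk a directory tree and flag log4j-core jars whose version falls in
    the CVE-2021-44228 range (2.0 to 2.14.1), judging by filename only."""
    flagged = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            m = JAR_RE.search(name)
            if m:
                version = tuple(int(x) for x in m.groups())
                if (2, 0, 0) <= version <= (2, 14, 1):
                    flagged.append(os.path.join(dirpath, name))
    return flagged

# Demo against a throwaway directory with fake jar files.
with tempfile.TemporaryDirectory() as root:
    for name in ("log4j-core-2.14.1.jar", "log4j-core-2.17.1.jar", "app.jar"):
        open(os.path.join(root, name), "w").close()
    flagged = find_vulnerable_jars(root)
    print(flagged)  # only the 2.14.1 jar is flagged
```

A sweep like this only finds the obvious copies; per the CISA/FBI advisory, organizations that were exposed should assume compromise and hunt for attacker activity, not merely patch.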


Don’t get lost in the cloud: How to manage multiple providers

For organizations that take an ad hoc approach to multi-cloud, the risks include cost overruns from engaging several different services, Linthicum warns. “They don’t have a good handle on how the money’s being spent in a particular cloud provider, even within the primary provider,” he says. “And they’re getting huge cloud bills they didn’t expect, to the point that the boards of directors and CIOs and C-suites are starting to notice.” Security concerns are another reason to centralize cloud management. Organizations with three or four providers should have a security system that spans all of them so they can avoid juggling several separate dashboards, Linthicum suggests. “That’s where people get confused, and that’s how breaches occur.” Besides a plan for how the various providers work together, Linthicum says, it’s crucial to have a financial operations (FinOps) program that monitors and manages usage and costs. “Many enterprises don’t have that right now,” he notes. “They wait for their bill to show up and then figure out what went on, and they’re just running it on a spreadsheet where they’re not getting a true FinOps program in place.”


When, Why and How Facilitation Skills Help Scrum Teams

Stress levels run high when two vocal members of the team, Minal and Linda, clash during the Sprint Retrospective. Minal expresses her disappointment with how little progress the team made during the Sprint. She adds sarcastically that maybe it was because of how they decided to implement the work, which was originally Linda’s idea. Linda responds quickly in a defensive tone and calls out that Minal is always quick to point out what’s wrong but doesn’t contribute many suggestions of her own. The Retrospective descends into a tense back-and-forth between them, and the Scrum Master, who is facilitating the Retrospective, firmly stops their argument. The Scrum Master moves the discussion to improvement ideas for the team, and many items around managing work in progress and exploring new technologies are suggested, but ultimately the team is stuck at an impasse over what to carry forward. The timebox ends and the Scrum Master calls an end to the Sprint Retrospective with no plan for how the team will improve.



Quote for the day:

"Make heroes out of the employees who personify what you want to see in the organization." -- Anita Roddick