Daily Tech Digest - August 08, 2022

Can Data Collection Persist Amid Post-Roe Privacy Questions?

The Supreme Court decision pushed data privacy discussions to the forefront once more, says Christine Frohlich, head of data governance at Verisk Marketing Solutions. “Those of us who have been working in the data industry have been thinking about this for a long time,” she says. “The regulations we’re seeing in California, and now what we’re seeing in Colorado, Connecticut, Virginia, and Utah have made this a real hot topic within our industry.” Companies have a fundamental responsibility, Frohlich says, to protect consumer privacy to the best of their ability. Customers may enjoy personalized experiences such as a digital interaction with a brand or having products marketed to them in a personal way, but she says they are also concerned about how their data is used. Federal legislation on data privacy might move forward faster in response to the Supreme Court decision, Frohlich says. The “right to be forgotten,” or a deletion requirement, is flowing through state legislation and is potentially being proposed at the federal level as well, she says.


Can fintech innovation be a force for good social impact?

Putting purpose over profits requires fintech innovation to have some social purpose other than making money and just being a ‘good’ fintech, and we know that consumers are now actively looking for this purpose when choosing their financial institution. At the same time, modern consumers value experience over things and wish for fintechs to be more people-centric. Fintechs often create competitive advantage by being able to tailor offerings for niche markets. Consumers appreciate the personal approach, feel like they’re supporting positive change, and are increasingly looking for companies that align better with their values. If another financial institution does this in a better way, they won’t hesitate to switch providers. ... We know what makes a ‘good’ fintech, but a fintech that is a force for good needs to reach wider than the immediate financial community’s needs. Fintechs can be innovative in their approaches and therefore have the ability and potential to help people in need. We’re already seeing examples of this where fintechs have encouraged financial inclusion ...


Post-Quantum Safe Algorithm Candidate Cracked in an Hour on a PC

SIKE was among several algorithms that advanced through a NIST competition to identify and define standardized post-quantum algorithms. Because quantum computers represent a threat to current measures for securing information and data, the organization wanted to pinpoint algorithms that stood the best chance of withstanding attacks from quantum computers. In a blog post, Steven Galbraith, a University of Auckland mathematics professor and a leading cryptographic expert, explains how the researchers accomplished the hack: “The attack exploits the fact that SIDH has auxiliary points and that the degree of the secret isogeny is known. The auxiliary points in SIDH have always been an annoyance and a potential weakness, and they have been exploited for fault attacks, the GPST adaptive attack, torsion point attacks, etc.” It’s not the end for SIKE. There may be ways to modify the algorithm to withstand these specific types of attacks. However, in an Ars Technica story, Jonathan Katz, a professor in the department of computer science at the University of Maryland, said the news that a classical computer could crack an encryption scheme meant to be safe from quantum devices is troubling.


Infrastructure as Code—Why Drift Management Is Not Enough

One of the most efficient ways of eliminating configuration drift is adopting infrastructure-as-code principles and using solutions such as Terraform. Instead of manually applying changes to sync the environments, which is inherently an error-prone process, you would define the environments using code. Code is clear, and is applied/run the same on any number of resources, without the risk of omitting something or reversing the order of some operations. By leveraging code versioning (e.g., Git), an infrastructure-as-code platform also provides a detailed record, including both present and past configuration, which removes the issue of undocumented modifications and leaves an audit trail as an added bonus. Tools like Terraform, Pulumi, and Ansible are designed for configuration management and can be used to identify and signal drift, sometimes even correcting it automatically—so you get the chance to make things right before drift has a real impact on your systems. As with any tool, the outcome depends on how you’re using it. Using a tool like Terraform does not by itself make your company immune to configuration drift.
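As a concrete illustration of using such a tool to signal drift: Terraform's plan command can be scripted as a scheduled drift check, since its -detailed-exitcode flag makes the exit status distinguish "no changes" from "live state diverged from code". A minimal sketch in Python (the working-directory argument is a placeholder; this is one possible wiring, not the only way to run it):

```python
import subprocess
import sys

def check_drift(workdir: str) -> bool:
    """Run `terraform plan` and report whether drift was detected.

    With -detailed-exitcode, terraform exits with:
      0 = no changes, 1 = error, 2 = pending changes (drift or new config).
    """
    subprocess.run(["terraform", "init", "-input=false"], cwd=workdir, check=True)
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    if result.returncode == 1:
        raise RuntimeError(f"terraform plan failed: {result.stderr}")
    return result.returncode == 2  # 2 means the live state diverged from the code

if __name__ == "__main__":
    drifted = check_drift(sys.argv[1] if len(sys.argv) > 1 else ".")
    print("Drift detected" if drifted else "Environment matches code")
```

Run on a schedule, a check like this turns drift from an undocumented surprise into an alert you can act on before it affects production.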


Why Israel is moving to quantum computing

Quantum computing at scale is expected to revolutionize a range of industries, as it has the potential to be exponentially faster than classical computers at specific applications. Both China and the United States, among others, have already started national initiatives for this new paradigm. Israel launched its own initiative in 2018, and in February it announced a $62 million budget for the effort. Israel, too, is placing its bets on quantum: the Israel Innovation Authority (IIA) has selected Quantum Machines to establish its national Quantum Computing Center, which will host Israel’s first fully functional quantum computers for commercial and research applications. ... According to Quantum Machines, the Center’s computers will have a full-stack software and hardware platform capable of running any algorithm out of the box, including quantum error correction and multi-qubit calibration. As quantum computing is notorious for its many distinct approaches to creating qubits, the platform will also support multiple qubit technologies, so that the center does not have to bet everything on one technology that may not turn out to be successful, which reduces the risk.


Meta is putting its latest AI chatbot on the web for the public to talk to

By releasing the chatbot to the general public, Meta wants to collect feedback on the various problems facing large language models. Users who chat with BlenderBot will be able to flag any suspect responses from the system, and Meta says it’s worked hard to “minimize the bots’ use of vulgar language, slurs, and culturally insensitive comments.” Users will have to opt in to have their data collected, and if so, their conversations and feedback will be stored and later published by Meta to be used by the general AI research community. “We are committed to publicly releasing all the data we collect in the demo in the hopes that we can improve conversational AI,” Kurt Shuster, a research engineer at Meta who helped create BlenderBot 3, told The Verge. ... Crucially, says Mary Williamson, a research engineering manager at Facebook AI Research (FAIR), while Tay was designed to learn in real time from user interactions, BlenderBot is a static model. That means it’s capable of remembering what users say within a conversation, but this data will only be used to improve the system further down the line.


Computer Science Proof Unveils Unexpected Form of Entanglement

A striking new proof in quantum computational complexity might best be understood with a playful thought experiment. Run a bath, then dump a bunch of floating bar magnets into the water. Each magnet will flip its orientation back and forth, trying to align with its neighbors. It will push and pull on the other magnets and get pushed and pulled in return. Now try to answer this: What will be the system’s final arrangement? This problem and others like it, it turns out, are impossibly complicated. With anything more than a few hundred magnets, computer simulations would take a preposterous amount of time to spit out the answer. Now make those magnets quantum—individual atoms subject to the byzantine rules of the quantum world. As you might guess, the problem gets even harder. “The interactions become more complicated,” said Henry Yuen of Columbia University. “There’s a more complicated constraint on when two neighboring ‘quantum magnets’ are happy.” These simple-seeming systems have provided exceptional insights into the limits of computation, in both the classical and quantum versions. 
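The classical version of this thought experiment is essentially an Ising-model ground-state search, and a brute-force sketch shows why it explodes: n magnets have 2^n possible arrangements. A toy sketch in Python, with invented random couplings standing in for the floating magnets (an illustration of the scaling, not the construction used in the proof):

```python
import itertools
import random

# n magnets, each pointing up (+1) or down (-1); couplings J[i][j] say
# whether a pair prefers to align (J > 0) or anti-align (J < 0).
n = 12
random.seed(0)
J = {(i, j): random.choice([-1.0, 1.0]) for i in range(n) for j in range(i + 1, n)}

def energy(spins):
    # Lower energy = more pairs "happy" with their relative orientation.
    return -sum(J[i, j] * spins[i] * spins[j] for (i, j) in J)

# Brute force over all 2**n arrangements: adding one magnet doubles the work,
# which is why a few hundred magnets is already far out of reach.
best = min(itertools.product([1, -1], repeat=n), key=energy)
print(f"checked {2**n} arrangements; ground-state energy = {energy(best)}")
```

The quantum version replaces each +1/-1 spin with a qubit, and the "happiness" constraint with an operator, making the search harder still.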


What is a QPU and how will it drive quantum computing?

A QPU, aka a quantum processor, is the brain of a quantum computer that uses the behaviour of particles like electrons or photons to make certain kinds of calculations much faster than processors in today’s computers. QPUs rely on behaviours like superposition, the ability of a particle to be in many states at once, described in the relatively new branch of physics called quantum mechanics. By contrast, CPUs, GPUs and DPUs all apply principles of classical physics to electrical currents. That’s why today’s systems are called classical computers. ... Thanks to the complex science and technology involved, researchers expect the QPUs inside quantum computers will deliver amazing results. They are especially excited about four promising possibilities. First, they could take computer security to a whole new level. Quantum processors could factor enormous numbers quickly, an operation whose difficulty underpins much of today’s cryptography. That means they could break today’s security protocols, but they could also help create new, much more powerful ones. In addition, QPUs are ideally suited to simulating the quantum mechanics of how stuff works at the atomic level.
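The cost of imitating superposition classically is what makes QPUs interesting: an n-qubit register is described by 2^n complex amplitudes, so the bookkeeping doubles with every qubit. A small numpy sketch (a purely illustrative simulation, not how a QPU works internally):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # single-qubit Hadamard gate

def uniform_superposition(n_qubits):
    # Start in |00...0> and apply H to every qubit: the state vector
    # ends up with 2**n equal amplitudes, one per classical bit string.
    state = np.zeros(2**n_qubits)
    state[0] = 1.0
    gate = H
    for _ in range(n_qubits - 1):
        gate = np.kron(gate, H)  # full n-qubit operator, 2**n x 2**n entries
    return gate @ state

for n in (2, 10, 20):
    print(n, "qubits ->", 2**n, "amplitudes to track classically")

print(uniform_superposition(3))  # eight amplitudes, each 1/sqrt(8)
```

A QPU holds that exponentially large state physically in its qubits, which is why classical simulation runs out of memory long before real hardware does.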


What metaverse infrastructure can bring to hospitals

When it comes to the most prominent use cases being explored alongside Avanade with the aim of driving value, this depends on the operational areas and scenarios being mapped out, according to dos Anjos. It’s vital that implementation is conducted in line with the specific needs and goals of each client. “Today, the most common demands are for remote assistance, and guided assistance,” he said. “Another common scenario being looked at is helping doctors prep for surgery. This is aided via a surgical plan powered by Microsoft HoloLens, which includes a 3D projection of the patient and all the data necessary in one interface. Users can interact with this without needing to touch anything.” For Amaral, opportunities in this trending area of tech can also expand to education purposes, with medical school lecturers being able to record footage of surgeries for students to view and interact with. “This would provide a 360-degree view of the surgery and allow students to zoom in and out where needed,” he explained. “This makes for a more immersive experience for medical students.”


IT leadership: You gotta have H.E.A.R.T.

Today’s best digital leaders have adapted their leadership playbooks for the times. If you go back and listen to the Tech Whisperers podcast episodes, you’ll hear the same themes and the same leadership wisdom over and over again. What’s the common denominator? Humility, Empathy, Adaptability, Resilience, and Transparency: H.E.A.R.T. There’s something palpable in how the CIOs I've spoken to balance high EQ leadership, holding people accountable, having the hard conversations, and delivering results. These are business-first executives who anticipate, innovate, and drive results – and they don’t get distracted by bright shiny objects. ... “As a leader, it’s important to understand that no one person, group, or culture has all the knowledge, skills, or information necessary for success in business,” Smith says. “That’s why at ELC we always say that people are our greatest asset. Diversity – not just in backgrounds but in experiences and perspectives – results in greater innovation and better problem-solving across an organization.”



Quote for the day:

"Management is efficiency in climbing the ladder of success; leadership determines whether the ladder is leaning against the right wall." -- Stephen Covey

Daily Tech Digest - August 05, 2022

Auto Industry at Higher Risk of Cyberattacks in 2023

Connected cars are one of the most significant factors driving these risks. These vehicles feature connectivity and include autonomous features, so attackers have more potential entry points and can do additional damage once inside. Self-driving vehicle sales could reach 1 million units by 2025 and skyrocket after, so these risks will grow quickly. Automakers also face risks from connected manufacturing processes. This trend has emerged in other sectors that have embraced IT/OT convergence. One-quarter of energy companies reported weekly DDoS attacks after implementing Industry 4.0 technologies. Their attack surfaces will increase as car manufacturers likewise implement these systems. ... One of the most important changes to make is segmenting networks. All IoT devices should run on separate systems from more sensitive endpoints and data to prevent lateral movement. Encrypting IoT communications and changing default passwords is also crucial. Manufacturers should update these systems regularly, including using updated anti-malware software. 


Why Developers Need a Management Plane

A management plane empowers line developers to accomplish all of this without a deep understanding or mastery of working with native data plane configuration files and policies for firewalls, networking, API management and application performance management. With the management plane, platform ops teams can reduce the need for developers to build domain-specific knowledge outside the normal realm of developer expertise. For example, a management plane can have a menu of options or decision trees to determine what degree of availability and resilience an application requires, what volume of API calls can be issued against an app or service or where an app should be located in the cloud for data privacy or regulatory reasons. Equally important, the management plane can improve security by providing developers smart recommendations on good security practices or putting in place specific limits on key resources or infrastructure to ensure that developers shifting left don’t inadvertently expose their organization to serious risk.


Tech hiring enters the Big Freeze

Google and Microsoft are not the only tech companies that have started to take a more cautious approach to hiring. Earlier this year, Twitter initially issued a hiring freeze, then laid off 30% of its talent acquisition team earlier this month. At the end of June, Meta CEO Mark Zuckerberg was hostile on a call with employees, saying that “realistically, there are probably a bunch of people at the company who shouldn’t be here.” A month later, the company’s Q2 2022 financial results showed its first ever decline in revenue, with Zuckerberg telling investors that the economic climate looked even graver than it did the previous quarter. Around the same time, Apple also announced that, while the company will continue to invest in product development, it will no longer increase headcount in some departments next year. ... Research shows that employees want to be regularly offered training and the chance to develop new skills and are more likely to stay at a company if given those opportunities. The Great Resignation was a major topic of conversation in the first half of this year and, for companies that are no longer hiring, losing more employees is not an option.


Cybersecurity could offer a way for underrepresented groups to break into tech

It seems that, given the sheer number of people needed in cybersecurity in the coming years, the field could represent a way for historically underrepresented groups to find their way into tech. CJ Moses, CISO at AWS, spoke at the company keynote about the importance of diverse ways of thinking when it comes to keeping companies secure. “Another key part of our culture is having multiple people in the room with different outlooks. This could be introversion or extroversion, coming from different backgrounds or cultures, whatever enables your culture to be looking at things differently and challenging one another,” he said. He added that new ways of thinking can be transformative to cybersecurity teams. “I also think new hires can offer a team high levels of clarity because they don’t have years of bias or groupthink baked into their mechanisms. So when you’re hiring, our best practices encourage being sensitive to the makeup of the interview panels, having multiple viewpoints and backgrounds, because diversity brings diversity.”


3 Things The C-Suite Should Know About Data Management And Protection

Ultimately, the massive increases in the three Vs (volume, velocity and variety) have, by and large, resulted in inconsistent data management and protection policies in companies across the globe. So, traditional approaches to data management and protection are no longer sufficient. You need to be prepared to support empowering your IT department with the ability to meet today’s challenges. Consider solutions like autonomous data management, which uses AI-driven technology to fully automate self-provisioning, self-optimization and self-healing data management services for the vast amounts of data in the multi-cloud environments enterprises are migrating toward. ... The cloud makes a lot of sense for a lot of reasons. It’s flexible, with scalability and mobility; efficient, including its accessibility and speed to market; and cost-effective, as it includes pay-as-you-go models and helps eliminate hardware expenses. But it can be a fickle beast, especially in this increasingly multi-cloud world, where enterprise data is dispersed across on-premises data centers and many private and public cloud service providers.


5 best practices for secure collaboration

What we have seen is that this has rapidly changed over the last couple of years: calling is still obviously very important, but other collaboration technologies have entered the landscape and have become equally, if not arguably more, important. And the first one of those is video. When you think about securing video, obviously a lot of folks have heard about unauthorized people [discovering] a meeting and [joining] it with an eye toward potentially disrupting the meeting or toward snooping on the meeting and listening in. And that has, fortunately, been addressed by most of the vendors. But the other real concern that we have seen arise from a security and especially a compliance perspective is that meetings are generating a lot of content. ... If you are a CSO, obviously you have ultimate responsibility for collaboration security. But you also want to work with the collaboration teams to either delegate ownership of managing day-to-day security operations to those folks or work with them to get input into what the risks are and what the possible mitigation techniques are.


Build .NET apps for the metaverse with StereoKit

Developing with StereoKit shouldn’t be too hard for anyone who’s built .NET UI code. It’s probably best to work with Visual Studio, though there’s no reason you can’t use any other .NET development environment that supports NuGet. Visual Studio users will need to ensure that they’ve enabled desktop .NET development for Windows OpenXR apps, UWP for apps targeting HoloLens, and mobile .NET development for Oculus and other Android-based hardware. You’ll need an OpenXR runtime to test code against, with the option of using a desktop simulator if you don’t have a headset. One advantage of working with Visual Studio is that the StereoKit development team has provided a set of Visual Studio templates that can speed up getting started by loading prerequisites and filling out some boilerplate code. Most developers are likely to want the .NET Core template, as this works with modern .NET implementations on Windows and Linux and gets you ready for the cross-platform template under development. 


The Data science journey of Amit Kumar, senior enterprise architect-deep learning at NVIDIA

The most important thing for aspirants is to get the fundamentals right before diving into data science and AI. Having a basic but intuitive understanding of linear algebra, calculus, and information theory helps to get a faster grip. Aspiring data scientists should not ignore the fundamental principles of software engineering in general, because nowadays the market is looking for full-stack data scientists with the capability to build an end-to-end pipeline, rather than just being a data science algorithm expert. ... My biggest challenge, which ultimately turned into my biggest achievement, was to start from scratch and build a world-class center of excellence in data science at HP India along with Niranjan Damera Venkata, Madhusoodhana Rao and Shameed Sait. This challenge was turned into an achievement by going into start-up mode within HP. Though we were part of a large organisation, we made sure that the center of excellence operated the way a successful startup works: by inculcating a culture of mutual respect and healthy competition, attracting and hiring the best talent, and providing freedom and flexibility.


Confidential Computing with WebAssembly

Confidential computing is of particular use to organizations that deal in sensitive, high-value data — such as financial institutions, but also a wide variety of other organizations. “We felt that confidential computing was going to be a very big thing and that it should be easy to use,” said Bursell, who was then chief security architect in the office of Red Hat’s chief technology officer. “And rather than having to rewrite all the applications and learn how to use confidential computing, it should be simple.” But it wasn’t simple. Among the biggest puzzles: attestation, the mechanism by which a host measures a workload cryptographically and communicates that measurement to a third party. “One of the significant challenges that we have is that all the attestation processes are different,” said McCallum, who led Red Hat’s confidential computing strategy as a virtualization security architect. “And all of the technologies within confidential computing are different. And so they’re all going to produce different cryptographic hashes, even if it’s the same underlying code that’s running on all.”


The Computer Scientist Challenging AI to Learn Better

The most successful method, called replay, stores past experiences and then replays them during training with new examples, so they are not lost. It’s inspired by memory consolidation in our brain, where during sleep the high-level encodings of the day’s activities are “replayed” as the neurons reactivate. In other words, for the algorithms, new learning can’t completely eradicate past learning since we are mixing in stored past experiences. There are three styles for doing this. The most common style is “veridical replay,” where researchers store a subset of the raw inputs — for example, the original images for an object recognition task — and then mix those stored images from the past in with new images to be learned. The second approach replays compressed representations of the images. A third far less common method is “generative replay.” Here, an artificial neural network actually generates a synthetic version of a past experience and then mixes that synthetic example with new examples. My lab has focused on the latter two methods. 
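A veridical replay step is straightforward to sketch: keep a buffer of raw past examples and mix a sample of them into every new training batch, so a gradient step cannot fully overwrite old learning. A minimal Python sketch (the model-update callable, buffer size and replay ratio are placeholders, not a specific published method):

```python
import random

class ReplayBuffer:
    """Stores raw past examples and mixes them into new training batches."""

    def __init__(self, capacity=10_000):
        self.capacity = capacity
        self.storage = []

    def add(self, example):
        if len(self.storage) >= self.capacity:
            self.storage.pop(random.randrange(len(self.storage)))  # evict a random old example
        self.storage.append(example)

    def sample(self, k):
        return random.sample(self.storage, min(k, len(self.storage)))

def train_step(model_update, new_batch, buffer, replay_ratio=0.5):
    # Mix stored old examples with the new ones so this update
    # cannot completely erase what was learned before.
    n_replay = int(len(new_batch) * replay_ratio)
    mixed = list(new_batch) + buffer.sample(n_replay)
    random.shuffle(mixed)
    model_update(mixed)      # placeholder: one optimizer step on the mixed batch
    for ex in new_batch:
        buffer.add(ex)       # remember today's data for tomorrow's replay
```

The compressed and generative variants replace self.storage with latent codes or with a generator network, but the mixing step stays the same.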



Quote for the day:

"The leadership team is the most important asset of the company and can be its worst liability" -- Med Jones

Daily Tech Digest - August 04, 2022

Artificial intelligence makes project planning better

Unlike neural networks, expert systems do not require up-front learning, nor do they necessarily require large amounts of data to be effective. Yes, expert systems can and do absolutely learn and get smarter over time (by adjusting or adding rules in the inference engine), but they have the benefit of not needing to be “trained up front” in order to function correctly. Capturing planning knowledge can be a daunting task and is arguably very specific and unique to individual organizations. If all organizations planned using the same knowledge, e.g., standard sub-nets, then we could simply put our heads together as an industry and establish a global “planning bible” to which we could all subscribe. This of course isn’t the case, and so for a neural network to be effective in helping us in project planning, we would need to mine a lot of data. Even if we could get our hands on it, it wouldn’t be consistent enough to actually help with pattern recognition. Neural networks have been described as black boxes — you feed in inputs, they establish algorithms based on learned patterns and then spit out an answer.
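The contrast is easy to see in miniature: an expert system is explicit rules plus an inference step, and it gives answers from day one with no training data. A toy forward-chaining sketch in Python (the planning rules here are invented for illustration):

```python
# Toy forward-chaining inference engine: no training data required,
# just explicit rules of the form "if these facts hold, conclude this".
RULES = [
    ({"foundation_poured"}, "framing_can_start"),
    ({"framing_can_start", "materials_on_site"}, "schedule_framing_crew"),
    ({"schedule_framing_crew", "crew_available"}, "framing_scheduled"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:                      # keep applying rules until nothing new fires
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"foundation_poured", "materials_on_site", "crew_available"}))
# -> includes framing_scheduled, derived purely from the rules, no training needed
```

Making such a system smarter means editing or adding rules, which is exactly the kind of incremental, organization-specific knowledge capture the passage describes.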


Trending Programming Language to Learn

Generally, Python is seen as an easy-to-learn software language thanks to its simple syntax, extensive library of guidelines and toolkits, and its compatibility with other prominent programming languages, including C and C++. Its popularity is demonstrated by its ranking in the TIOBE and PYPL indexes in 2021: Python was the leading programming language. Companies like Intel, Facebook, Spotify, and Netflix are still employing the best Python developers to take advantage of this language’s extensive libraries. ... Although Go is somewhat similar to C in syntax, it’s a unique language that offers excellent memory protection and management functionality. It is likely that Go’s popularity will continue to rise as it’s used to design systems that use artificial intelligence, just like Python. ... It’s no wonder Java is so popular; getting started with this language is fairly simple, and it boasts a high level of security. Furthermore, it’s capable of handling large amounts of data. The technology is used in a variety of applications on almost all major systems like mobile, desktop, web, artificial intelligence, cloud applications, and more.


Why Microsoft Azure and Google Cloud love Ampere Computing’s server chips

“Ampere is designing for a completely different goal” than its rivals, says Dylan Patel, a chip industry analyst from SemiAnalysis. “Intel is targeting a wider net of people, versus what Ampere is going for, which allows them to make certain sacrifices and gain benefits from that,” he says. Though the company’s chips have a “weaker and smaller CPU core” than some of the x86 processors designed by Intel and AMD for servers, Patel says this means the chips themselves are smaller, and that as a result power usage is lower and the chips are more efficient. “That’s a big deal” for data centre operators, he says. Patel adds: “Renee realised this needed to be done much earlier than most of the cloud providers themselves. Amazon figured out they needed to build CPUs like this, but the others did not, and as a result, most of them are now in a position where they need to buy the technology from someone else. That’s why you find Ampere in every major cloud now.”


Digital transformation: How to guide innovation leaders

Digital Trailblazers are lifelong learners and can start their careers in product management, technology, security, or data roles. What sets Digital Trailblazers apart is their ability to ask the questions that challenge people’s thinking and get into the weeds around customer experience, data quality issues, or how to integrate emerging technologies. But finding Digital Trailblazers isn’t easy, and guiding them requires leadership’s commitment to empowering their creativity and collaboration. CIOs who dedicate themselves and their lieutenants to seek and guide aspiring transformation leaders are setting their entire organization up for success for years to come. Once you identify these leaders, you must encourage them to step out of their comfort zones because many will soon be experiencing firsts such as presenting to leadership, responding to detractors, or making tough calls in setting priorities. In the book, I tell the stories of what it feels like to be a Digital Trailblazer, knowing they will face many new experiences. I’ll share an excerpt from the chapter “Buried in bad data.” 


6 key advantages of ERP and CRM software integration

Typically, businesses purchase and deploy ERP and CRM systems separately. However, if your ERP and CRM systems have their own databases, you will constantly have to worry about keeping them synchronised. Whether it’s a CRM user from customer service or an ERP user from billing who updates a customer’s account, any changes implemented in one system will have to be transferred to the other. Because this is a manual process, having to wait for a database to update before you can, for example, process bills, replenish inventory levels and arrange product returns for customers will result in slower operations and an increased risk of database errors. Applying integrated CRM functionality to your ERP solution will ensure both systems share one database, meaning updates in either system are visible instantaneously. Customers can be billed faster and product returns can be automated between systems, providing your business with clearer visibility into all stages of your business’ sales process.


Can You Recover Losses Sustained During a Cloud Outage?

Even if the providers do have insurance, the terms of those policies are unlikely to cover more than a fraction of the costs incurred by the clients. “Negotiate how much risk is being held by the company and how much risk is being retained by the cloud service provider,” advises Michael Phillips, chief claims officer of cyber insurance company Resilience. “It's an unfortunate fact of life right now that many of the major cloud service providers are willing to accept none of the risk of their own failure.” The public cloud is a multi-tenant environment, further complicating the issue of responsibility. “Many cloud providers currently do not offer meaningful SLAs, arguing the application must meet the demands of multiple customers,” says Lisa Rovinsky, partner at full-service law firm Culhane Meadows. “I think this power structure will be changing as customers become more sophisticated and hybrid cloud solutions develop.” This puts the onus on clients to ensure that their cloud agreements are as airtight as possible from the get-go. Boilerplate contracts are unlikely to offer even cursory protection, so customization is increasingly the name of the game.


6 Smart Ways to Optimize Your Tech Stack

Consolidation can be thought of through the lens of technical debt. While some technical debt may be intentional, all technical debt creates added complexity that gets in the way of organizational agility. Recent research found that while 94% of organizations recognize the impact of technical debt on digital transformation, less than half have a strategy in place to manage it. Looking for ways to eliminate technical debt by consolidating solutions and letting go of those that are highly customized or out of support offers a clear path to delivering measurable business value. ... EA can align with DevOps in two complementary ways. First, the development tech stack must be reflected in the overall organizational tech stack. On this front, collaboration with the head of development and software architects is key. Second, EA can use their tools and expertise to help dev teams manage their own tech landscape, particularly when it comes to microservices. A microservice catalog can serve in this case as both an essential tool for DevOps, particularly when it promotes reuse, and a natural extension ...


Hacking Concerns Delay Balloting for New UK Prime Minister

Online voting has been changed so that instead of a Tory party member being able to use their unique code multiple times to change their vote, the code will instead be deactivated after they initially vote. "The part that caused particular concern was being able to change your vote after submission," says Alan Woodward, professor of computer science at the University of Surrey. NCSC, which is the public-facing arm of Britain's security, intelligence and cyber agency, GCHQ, confirms that it has been providing guidance to the Tory party. "As you would expect from the U.K.'s national cybersecurity authority, we provided advice to the Conservative Party on security considerations for online leadership voting," an NCSC spokesperson tells Information Security Media Group. "Defending U.K. democratic and electoral processes is a priority for the NCSC, and we work closely with all parliamentary political parties, local authorities and MPs to provide cybersecurity guidance and support." The Conservative Party acknowledged the cybersecurity center's input.


How to succeed as an enterprise architect: 4 key strategies

Technical debt should be used intentionally to make incremental gains, not accumulated from neglect. Every architect must decide how to deal with debt—both addressing it and taking it on—to succeed in their role. Get comfortable with technical debt as a tool. The key questions that need to be addressed are when to take on debt and when to pay it off. Take on debt when the future of a product or feature is uncertain. Whether for a proof of concept or a minimum viable product, use debt to move fast and prove or realize value quickly before committing more cycles to make it robust. Architects can minimize the impact of debt by first solidifying interfaces. Changes to interfaces impact users. Consequently, these changes are not only sometimes technically difficult but can also be logistically challenging. And the more difficult it is to address debt, the less likely it is to be managed. Pay down technical debt when its value proposition turns negative and has a measurable impact on the business. That impact could come in the form of decreased engineering velocity, infrastructure brittleness leading to repeated incidents, monetary cost, or many other related effects. 


6 ways your cloud data security policies are slowing innovation – and how to avoid that

Some security professionals may consider this first pitfall as irrelevant to their organization, as they allow data to be freely moved or modified across cloud environments without restrictions. While beneficial for business purposes, this approach ignores the exponential growth in data and its tendency to spread across data stores and environments, with little ability to locate where it resides. This lack of visibility and control will inevitably lead to loss of what may be sensitive, personal or customer data in the process. If data is the fuel of many of our business processes, then losing some of it means that you’re running low on gas. Innovative teams require access to data. Whether it’s data scientists who are creating new machine learning algorithms, threat researchers researching new trends, marketing or product management teams who need to understand customer behavior or other stakeholders – innovating without data is like trying to bake without an oven. Managing organizational access to data may be critical to ensure that it isn’t abused or lost, but creating stringent access control policies and boundaries around data usage creates what are essentially data silos, once again restricting innovation.



Quote for the day:

"Failures only triumph if we don't have the courage to try again." -- Gordon Tredgold

Daily Tech Digest - August 03, 2022

Why the future of APIs must include zero trust

Devops leaders are pressured to deliver digital transformation projects on time and under budget while developing and fine-tuning APIs at the same time. Unfortunately, API management and security are an afterthought when devops teams rush to finish projects on deadline. As a result, API sprawl happens fast, multiplying when all devops teams in an enterprise don’t have the API management tools and security they need. More devops teams require a solid, scalable methodology to limit API sprawl and provide least-privileged access to APIs. In addition, devops teams need to move API management to a zero-trust framework to help reduce the skyrocketing number of breaches happening today. The recent webinar sponsored by Cequence Security and Forrester, Six Stages Required for API Protection, hosted by Ameya Talwalkar, founder and CEO, with guest speaker Sandy Carielli, Principal Analyst at Forrester, provides valuable insights into how devops teams can protect APIs. In addition, their discussion highlights how devops teams can improve API management and security.


India withdraws personal data protection bill that alarmed tech giants

The move comes as a surprise, as lawmakers had indicated recently that the bill, unveiled in 2019, could see the “light of the day” soon. New Delhi received dozens of amendments and recommendations from a Joint Committee of Parliament that “identified many issues that were relevant but beyond the scope of a modern digital privacy law,” said India’s Junior IT Minister Rajeev Chandrasekhar. The government will now work on a “comprehensive legal framework” and present a new bill, he added. ... “The Personal Data Protection Bill, 2019 was deliberated in great detail by the Joint Committee of Parliament. 81 amendments were proposed and 12 recommendations were made towards a comprehensive legal framework on the digital ecosystem. Considering the report of the JCP, a comprehensive legal framework is being worked upon. Hence, in the circumstances, it is proposed to withdraw ‘The Personal Data Protection Bill, 2019’ and present a new bill that fits into the comprehensive legal framework,” India’s IT Minister Ashwini Vaishnaw said in a written statement Wednesday.


Don't overengineer your cloud architecture

A recent Deloitte study uncovered some interesting facts about cloud computing budgets. You would think budgets would make a core difference in how businesses leverage cloud computing effectively, but they are not good indicators to predict success. Although this could indicate many things, I suspect that money is not correlated to value with cloud computing. In many instances, this may be due to the design and deployment of overly complex cloud solutions when simpler and more cost-effective approaches would work better to get to the optimized value that most businesses seek. If you ask the engineers why they designed the solution this way (whether overengineered or not), they will defend their approach with some reason or purpose that nobody understands but them. ... This is a systemic problem now, which has arisen because we have very few qualified cloud architects out there. Enterprises are settling for someone who may have passed a vendor’s architecture certification, which only makes them proficient in a very narrow grouping of technology and often doesn’t consider the big picture.


Leveraging data privacy by design

Privacy laws and regulations, therefore, can include guidelines for facilitating industry standards, benchmarks for privacy-enhancing technologies and funding for privacy-by-design research, to incentivise technology designers to enhance privacy safeguards in their product designs, thereby promoting technological models that are privacy savvy. The above can be better understood from an example. For instance, the price paid for a helmet by a motorbike rider is a compliance cost, as it is an additional purchase required for safety over and above his immediate need for using a bike as a tool for commutation. However, a seat belt that is subsumed as a component of a car and not an additional requirement is perceived differently by the owner. Thus, compliance requirements that are perceived as additional obligations result in the perception of increased compliance costs, whereas compliance requirements embedded in the design of the product itself are considered part of the total product price and not separate costs. Privacy by design can thus prompt a shift in business models, whereby privacy features are incorporated within the technological design of the product itself.


Is it bad to give employees too many tech options?

The most important question in developing (or expanding) an employee-choice model is determining how much choice to allow. Offer too little and you risk undermining the effort's benefits. Offer too much and you risk a level of tech anarchy that can be as problematic as unfettered shadow IT. There isn’t a one-size-fits-all approach. Every organization has unique culture, requirements/expectations, and management capabilities. An approach that works in a marketing firm would differ from a healthcare provider, and a government agency would need a different approach than a startup. Options also vary depending on the devices employees use — desktop computing and mobile often require differing approaches, particularly for companies that employ a BYOD program for smartphones. ... Google is making a play for the enterprise by offering ChromeOS Flex, which turns aging PCs and Macs into Chromebooks. This allows companies to continue to use machines that have dated or limited hardware, but it also means adding support for ChromeOS devices. 


Patterns and Frameworks - What's wrong?

Many people say that we should prefer libraries to frameworks, and I must say that might be true. If a library can do the job you need (for example, the communication between a client and a server I presented at the beginning of the article) and meets the performance, security, protocol and any other requirements your service needs to support, then the fact that we can have a "Framework" automate some class generation for us might be of minor importance, especially if such a Framework will not be able to deal with the application classes and would force us to keep creating new patterns just to convert object types. ... Yet, they fall short when dealing with app-specific types and force us to either change our types just to be able to work with the framework or, when two or more frameworks are involved and there's no way out, to create alternative classes and copy data back and forth, doing the necessary conversions, which completely defeats the purpose of having transparent proxies.


Where are all the technologists? Talent shortages and what to do about them

Instead of looking for that complete match, shift to 80% – the other 20% can almost always be met through training, support and development once in the job. Another flexibility is around age. The most sought-after candidates are in the 35-49 age bracket. But don’t rule out the under-35s or the over-50s. There are brilliant people in both groups – one with all the potential for the future, the other with invaluable experience and work knowhow. This brings us to another absolutely key approach: investing in training and upskilling. I have one client who is looking ahead and can see that they will have a significant software development skills requirement in about four years’ time. So they are training their existing software engineers now, so they can move into these roles when the time comes. There is a growing emphasis among digital leaders on increasing the amount of internal cross-training into tech. This is something that can be applied externally, too. Look outside the business for talent that can be supported into a tech career – people who may be in other fields right now but have the right aptitude, mindset and ambition.


We’re Spending Billions Each Year on Cybersecurity. So Why Aren’t Data Breaches Going Away?

As companies invest heavily in technology, communication, and training to reduce cybersecurity risk and as they begin seeing the positive impact of those efforts, they may let their guard down—not paying as much attention to the risks, not communicating as often, or failing to ensure that new employees (or employees in new positions) are receiving the information and training they need. Cybercrooks only need to be successful once to achieve their goals, but companies need to be successful 100% of the time to avoid being compromised. Consider this: security is subject to the same natural laws that govern the rest of the universe. Entropy is real… we move from order to chaos. ... A strong security culture is a must-have to combat the continuous threats that all companies are subject to. Employees’ security awareness, behaviors and the organization’s culture must be assessed regularly. Policies and training programs should be consistently updated to address the changing threat landscape. Failure to do so puts companies at risk of data theft, business interruption, or falling victim to ransomware scams.


What is supervised machine learning?

A common process involves hiring a large number of humans to label a large dataset. Organizing this group is often more work than running the algorithms. Some companies specialize in the process and maintain networks of freelancers or employees who can code datasets. Many of the large models for image classification and recognition rely upon these labels. Some companies have found indirect mechanisms for capturing the labels. Some websites, for instance, want to know if their users are humans or automated bots. One way to test this is to put up a collection of images and ask the user to search for particular items, like a pedestrian or a stop sign. The algorithms may show the same image to several users and then look for consistency. When a user agrees with previous users, that user is presumed to be a human. The same data is then saved and used to train ML algorithms to search for pedestrians or stop signs, a common job for autonomous vehicles. Some algorithms use subject-matter experts and ask them to review outlying data. Instead of classifying all images, it works with the most extreme values and extrapolates rules from them.
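At its simplest, the consistency check described above is majority voting: agreement with the consensus flags the user as human, and the consensus answer becomes the training label. A small sketch (the vote data and the 60% agreement threshold are invented for illustration):

```python
from collections import Counter

# Several users label the same image; the majority answer becomes the
# training label, and agreement with it is evidence a user is human.
votes = {
    "img_001": ["stop_sign", "stop_sign", "pedestrian", "stop_sign"],
    "img_002": ["pedestrian", "pedestrian", "pedestrian"],
}

def aggregate(votes_for_image, min_agreement=0.6):
    counts = Counter(votes_for_image)
    label, n = counts.most_common(1)[0]
    agreement = n / len(votes_for_image)
    # Keep only labels with enough consensus; ambiguous images can be
    # routed to subject-matter experts instead, as the passage suggests.
    return label if agreement >= min_agreement else None

labels = {img: aggregate(v) for img, v in votes.items()}
print(labels)  # {'img_001': 'stop_sign', 'img_002': 'pedestrian'}
```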


Machine learning creates a new attack surface requiring specialized defenses

While all adversarial machine learning attack types need to be defended against, different organizations will have different priorities. Financial institutions leveraging machine learning models to identify fraudulent transactions are going to be highly focused on defending against inference attacks. If an attacker understands the strengths and weaknesses of a fraud detection system, they can use that to alter their techniques to go undetected, bypassing the model altogether. Healthcare organizations could be more sensitive to data poisoning. The medical field has been among the earliest adopters of using massive historical data sets to predict outcomes with machine learning. Data poisoning attacks can lead to misdiagnosis, alter results of drug trials, misrepresent patient populations and more. Security organizations themselves are presently focusing on machine learning bypass attacks that are actively being used to deploy ransomware or backdoor networks. ... The best advice I can give to a CISO today is to embrace patterns we’ve already learned on emerging technologies.
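Data poisoning is easy to demonstrate at toy scale: flip a fraction of the training labels and watch test accuracy fall. A sketch using scikit-learn on synthetic data (an illustration of label-flipping, not a realistic attack on a medical or fraud system):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a fraud-detection or diagnosis dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(flip_fraction):
    y_poisoned = y_tr.copy()
    rng = np.random.default_rng(0)
    idx = rng.choice(len(y_poisoned), int(flip_fraction * len(y_poisoned)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]        # label-flipping "attack"
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)               # evaluated on clean labels

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"{frac:.0%} of training labels flipped -> test accuracy {accuracy_with_poisoning(frac):.2f}")
```

Real poisoning is subtler than random flips, but the experiment shows why training-data integrity is itself an attack surface.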



Quote for the day:

"There are three secrets to managing. The first secret is have patience. The second is be patient. And the third most important secret is patience." -- Chuck Tanner

Daily Tech Digest - August 02, 2022

What Women Should Know Before Joining the Cybersecurity Industry

Women still are underrepresented in software engineering and IT. And many times, cybersecurity gets lumped together with those, and with that comes the belief that it requires the same skills. And that's simply not the case. At the core, the job of cybersecurity teams is to assess, prioritize, and work to resolve risks; nothing in there requires a STEM background or an understanding of software engineering. Sure, these risks might relate to code a developer wrote, or a cloud environment the IT team deployed, but reviewing alerts, assessing the impact to the business and the potential risk, and determining the appropriate course of action — those are not things that require a security professional to be a developer or to moonlight in IT. Computer science skills and backgrounds aren't a barrier to the cybersecurity profession — we're a business function, not a technical one. ... If you're on a cybersecurity team, you're tasked with keeping all these teams safe, each and every day. But this isn't something you can do alone. You need help from all of them in order to deliver that protection.


Overcoming the Top 3 Challenges of Infrastructure Modernization

Container environments like Kubernetes provide similar benefits and challenges as the cloud. Containers empower IT teams to increase efficiency, agility and speed, improving application life cycle management and making it faster and easier to modernize existing applications. Like the cloud, though, containers must be optimized to deliver on their ability to reduce costs and streamline performance. To orchestrate containers effectively, IT must understand how to allocate them. As with cloud provisioning, under-allocating container resources can result in issues with service assurance, while over-allocation can lead to wasted spending, especially since individual application teams tend to request more resources than they need to be safe. Right-sizing container environments is particularly important when containers are used to manage the impact of fluctuating business demands on IT systems. It’s crucial to optimize container environments for your current state, but it’s also important to know what’s coming so resources can be allocated accordingly.
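One common right-sizing heuristic is to derive container requests and limits from observed usage percentiles instead of team guesses. A minimal sketch (the synthetic usage samples, percentile choices and headroom factor are assumptions for illustration, not a standard):

```python
import numpy as np

# Observed CPU usage samples for one container, in millicores,
# e.g. as scraped from a metrics system over a week (synthetic here).
usage_millicores = np.random.default_rng(1).gamma(shape=2.0, scale=120.0, size=10_000)

def right_size(samples, request_pct=95, limit_pct=99.5, headroom=1.2):
    # Request covers typical demand; limit covers spikes plus headroom.
    request = float(np.percentile(samples, request_pct))
    limit = float(np.percentile(samples, limit_pct)) * headroom
    return round(request), round(limit)

request, limit = right_size(usage_millicores)
print(f"resources.requests.cpu: {request}m")
print(f"resources.limits.cpu: {limit}m")
```

Basing allocations on measured demand counters the tendency of application teams to over-request "to be safe", while the headroom factor still absorbs the fluctuating business demand the passage mentions.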


Tracking Ransomware: Here's Everything We Still Don’t Know

ENISA estimates that during the timeframe it studied, there were 3,640 successful ransomware attacks, of which it was only able to obtain details for 623 incidents. "All results and conclusions as presented should take into account this disclaimer concerning the number of incidents used in this analysis" and highlight the overall lack of solid details about so many incidents, it says. "In addition, the fact that we were able to find publicly available information for [only] 17% of the cases highlights that when it comes to ransomware, only the tip of the iceberg is exposed and the impact is much higher than what is perceived," it says. Indeed, most attacks never get publicly reported, because victims don't want the negative publicity. Unfortunately, getting a victim to pay quickly and secretly suits ransomware-wielding attackers too. Law enforcement has a tough time identifying individual attackers or groups at work, prioritizing them based on impact, and issuing warnings to help other organizations block groups' commonly used tactics. 


Managing Kubernetes Secrets with the External Secrets Operator

ESO is a Kubernetes operator that connects to external secrets-management systems like the ones we mentioned above, reads secret information and injects the values into Kubernetes secrets. It is a collection of custom API resources that provide a user-friendly abstraction for the external APIs that manage the lifecycle of the secrets for us. Like all other Kubernetes operators, ESO is composed of some main components:

- Custom Resource Definitions (CRD): These define the data schema of the settings available for the operator, in our case the SecretStore and ExternalSecret definitions.
- Programmatic Structures: These define the same data schema as the CRDs above using the programming language of choice, in our case Go.
- Custom Resources (CR): These hold the values for the settings defined by the CRDs and describe the configuration for the operator.
- Controller: This is where the actual work takes place. Controllers act on custom resources and are responsible for creating and managing the resources. They can be created in any programming language, and ESO controllers are created in Go.


Can artificial intelligence really help us talk to the animals?

Raskin is the co-founder and president of Earth Species Project (ESP), a California non-profit group with a bold ambition: to decode non-human communication using a form of artificial intelligence (AI) called machine learning, and make all the knowhow publicly available, thereby deepening our connection with other living species and helping to protect them. A 1970 album of whale song galvanised the movement that led to commercial whaling being banned. What could a Google Translate for the animal kingdom spawn? The organisation, founded in 2017 with the help of major donors such as LinkedIn co-founder Reid Hoffman, published its first scientific paper last December. The goal is to unlock communication within our lifetimes. “The end we are working towards is, can we decode animal communication, discover non-human language,” says Raskin. “Along the way and equally important is that we are developing technology that supports biologists and conservation now.” Understanding animal vocalisations has long been the subject of human fascination and study. 


Microsoft's new security tool lets you to see your systems like a hacker would

The attack surface management service could be useful given data showing that attackers start scanning the internet for exposed vulnerable devices within 15 minutes of a major flaw's public disclosure, and generally continue scanning the internet for older flaws like last year's nasty Exchange Server flaws, ProxyLogon and ProxyShell. This service discovers a customer's unknown and unmanaged resources that are visible and accessible from the internet – giving defenders the same view an attacker has when they select a target. Defender EASM helps customers discover unmanaged resources that could be potential entry points for an attacker. Across MSTIC and Microsoft 365 Defender Research, Microsoft is tracking 250 different actors and ransomware families. "We're providing intelligence across all of them and bringing that into your security team — not just to learn the latest news… but also to explore it, so if I see an indicator, I might explore where that might live on the network and connect that to what I'm seeing in my company. It's like a workbench for analysts inside a company," says Lefferts.


Microsoft hails success of hydrogen fuel cell trial at its New York datacentre

The company deployed a proton exchange membrane (PEM) fuel cell technology at its Latham site, which generates electricity by facilitating a chemical reaction between hydrogen and oxygen that creates no carbon emissions whatsoever. “The PEM fuel cell test in Latham demonstrated the viability of this technology at three megawatts, the first time at the scale of a backup generator at a datacentre,” the blog post stated. “Once green hydrogen is available and economically viable, this type of stationary backup power could be implemented across industries – from datacentres to commercial buildings and hospitals.” The company first started experimenting with the use of PEM fuel cells as an alternative to diesel backup generators in 2018, having previously tested and ruled out the use of natural gas-powered solid oxide fuel cells on cost grounds. This work gave rise to a collaboration between Microsoft and the National Renewable Energy Laboratory in 2018 that saw the pair deploy a 65 kW PEM fuel cell generator to power a rack of computers.


Legislators Gear Up to Take On Cloud Outages

The good news, if you’re in favor of this kind of regulation (or the bad news if you’re not), is that regulatory bodies across the Atlantic seem to be sliding towards a new compliance regime for cloud providers along these lines. A paper from the UK Treasury, published last month, revealed that the Treasury and the Bank of England have been mulling a new regulatory framework for “critical” cloud-based third-party services since 2019. They propose fairly broad powers to enforce standards and investigate violations. This isn’t legislation, of course; that step, the paper notes, will come “when parliamentary time allows,” and since Britain won’t have a government before September, we will likely be hearing more of this in 2023. Meanwhile, on the Continent, the European Council and Parliament came to an understanding in May that the Digital Operational Resilience Act (DORA), a regulatory framework that is not yet law, will ensure that the financial sector, including cloud platforms, can “maintain resilient operations through a severe operational disruption.”


What transformational leaders do differently

A transformational leader actively listens and establishes trust with their team, encourages diversity of thought, and creates an environment where the team feels they “belong” and are comfortable sharing ideas without judgment. Effective change cannot happen without everyone working together against a common purpose, recognizing that a team is more important than any individual, and always putting the company first when making decisions. A leader must create an environment where team members feel seen, heard, and fully understand the company and department strategy and goals. As a multi-generational, family-owned business, Southern Glazer’s culture has an entrepreneurial spirit that challenges team members to think beyond the here and now, focusing on how we can do something better than before. Technology is business, and it is the responsibility of the IT team to bring innovative ideas that drive transformational change, to digitally transform across all company functions to create the right employee and business partner experience while also delivering operational efficiency and effectiveness.


Entrepreneurship for Engineers: Solo Founder or Co-Founder?

Founding a startup is hard, and it can be a lonely road, especially for solo founders. There are a lot of issues that come up in a startup that you can’t talk about with your employees, can’t discuss with your investors, and that your friends won’t understand (unless they are startup founders themselves) and your spouse won’t get, either. “I was a founder, and I had a co-founder, and I cannot thank God enough to have had that opportunity,” said Dokania. “It definitely makes it easier emotionally.” Raman echoed this sentiment. “It’s incredibly hard to build a company, and doing so while knowing that you are entirely responsible for the success or failure through that entire journey is exceptionally stressful,” she said. “The highs are very high, but the lows are so extremely low.” Many founders, especially early on, think of the advantage of a co-founder as finding someone with complementary skills, so you can build the business while each focusing on your strengths. However, Dokania and Raman agreed that the primary benefit of having co-founders is emotional: humans are social animals, and building a company is stressful enough without also being lonely and isolating.



Quote for the day:

"Leaders begin with a different question than others. Replacing who can I blame with how am I responsible?" -- Orrin Woodward

Daily Tech Digest - August 01, 2022

4 fundamental practices in IoT software development

One of the greatest concerns in IoT is security, and how software engineers address it will play a deeper role. As devices interact with each other, businesses need to be able to securely handle the data deluge. There have already been many data breaches in which smart devices were the target, notably Osram, whose IoT lightbulbs were found to have vulnerabilities that could give an attacker access to a user’s network and the devices connected to it. Security needs to be tackled at the start of the design phase, with requirement tradeoffs made as needed, rather than added as a mere ‘bolt-on’. This is highly correlated with software robustness. It may take a little more time to design and build robust software upfront, but secure software is more reliable and easier to maintain in the long run. A study by CAST suggests that one-third of security problems are also robustness problems, a finding that is borne out in our field experience with customers. Despite software developers’ best intentions, management is always looking for shortcuts. In the IoT ecosystem, first to market is a huge competitive driver, so security, quality and dependability can end up sacrificed for speed to release.
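
One way to make “security at the start of the design phase” concrete is to require that a device refuse firmware images that fail signature verification. Below is a minimal sketch using the Python cryptography library; the function and the omitted flash step are illustrative, not any vendor’s actual update path:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def apply_update(image: bytes, signature: bytes,
                 vendor_key: Ed25519PublicKey) -> bool:
    """Install a firmware image only if the vendor's Ed25519 signature verifies.

    Refusing unsigned or tampered images at design time closes the kind of
    update-channel hole that can turn a smart device into a network foothold.
    """
    try:
        vendor_key.verify(signature, image)  # raises InvalidSignature on mismatch
    except InvalidSignature:
        return False                         # keep running the old firmware
    # flash_partition(image)  # device-specific write step, omitted in this sketch
    return True
```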


Accountability in algorithmic injustice

Often, journalists fixate on finding broken or abusive systems but miss what happens next. In the majority of cases, little to no justice is found for the victims. At most, the faulty systems are unceremoniously taken out of circulation. So, why is it so hard to get justice and accountability when algorithms go wrong? The answer goes deep into the way society interacts with technology and exposes fundamental flaws in the way our entire legal system operates. “I suppose the preliminary question is: do you even know that you’ve been shafted?” says Karen Yeung, a professor and an expert in law and technology policy at the University of Birmingham. “There’s just a basic problem of total opacity that’s really difficult to contend with.” The App Drivers & Couriers Union (ADCU), for example, had to take Uber and Ola to court in the Netherlands to try to gain more insight into how the companies’ algorithms make automated decisions on everything from how much pay and deductions drivers receive to whether or not they are fired. Even then, the court largely refused their request for information. Further, even if the details of a system are made public, there is no guarantee people will be able to fully understand it – and that includes those using the system.


Data Mesh: To Mesh or not to Mesh?

Data Mesh allows teams to curate and generate data and create usable data products for other teams. It also ensures that platform teams can put their efforts into data engineering while data professionals handle domain-specific data issues. While business data professionals are responsible for the quality and reliability of the data their teams produce, they can get help from platform teams when technical problems arise. Beyond that, Data Mesh design is oriented towards business users and requires relatively little intervention from platform teams. This is unlike centralized data teams, which are responsible for everything from data frameworks and access to handling data-related requests. To conclude, Data Mesh, as a decentralized architecture, encourages each party to excel in their area of expertise: platform teams focus on technology, engineering, and data pipelines, while data professionals are accountable for data quality. This holistic approach ensures end users can perform their tasks by leveraging data insights without waiting on the results of a custom request.
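
To make that division of labor concrete, here is a hedged sketch of what a domain-owned data product with built-in quality checks might look like; the class shape, field names, and checks are invented for illustration and are not a Data Mesh standard:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DataProduct:
    """A domain-owned data product: the business team is accountable for
    quality, while the platform team owns the delivery machinery."""
    name: str
    owner_domain: str                # business team accountable for the data
    output_port: str                 # address consumers read the product from
    quality_checks: list[Callable[[list[dict]], bool]] = field(default_factory=list)

    def publish(self, records: list[dict]) -> None:
        # Domain-owned quality gate: every check must pass before release.
        for check in self.quality_checks:
            if not check(records):
                raise ValueError(f"{self.name}: quality check failed")
        # The platform-owned pipeline would ship the records; stubbed out here.
        print(f"published {len(records)} records to {self.output_port}")

orders = DataProduct(
    name="orders",
    owner_domain="sales",
    output_port="warehouse.sales.orders_v1",
    quality_checks=[lambda rs: all("order_id" in r for r in rs)],
)
orders.publish([{"order_id": 1, "total": 42.0}])
```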


Chase CIO details what entry-level job-seekers need to succeed in Fintech

Never stop learning. The skills you mastered a few years ago may no longer be relevant today, which is why it’s important to be open to constant learning. Whether you are starting your career or have years of experience, take it upon yourself to learn new skills and technologies. ... The skills required to be a technologist have evolved, but so have the ways we work with colleagues across lines of business. One change we’ve really embraced as an organization is embarking on an agile and product transformation. We’ve taken advantage of the opportunity that came with the changing behaviors of consumers over the past few years to embrace agile at a different scale. This matters tremendously, because when we deploy code or build an entirely new product, it helps millions of consumers reach their financial goals. The pace of change has accelerated, but the focus on making it easier for our customers to bank with Chase has not. Today, we’ve reorganized ourselves away from project-based teams into product-based teams. Each product now has a dedicated tech, product, design, and data & analytics leader to help speed up decision-making and improve connectivity and collaboration.


Attacks using Office macros decline in wake of Microsoft action

"It's a hugely important step Microsoft is taking to start blocking these macros by default, especially due to how invisible macros are to the majority of users," adds Nathan Wenzler, chief security strategist at Tenable, a vulnerability scanning company. "But that doesn't mean the threat is eradicated or we shouldn't continue to remind users to be vigilant about opening files from untrusted sources." Other companies are seeing threat actors switching tactics because of Microsoft's move, too. "The adversaries are aware of it," observes Tim Bandos, executive vice president of cybersecurity at Xcitium, a maker of an endpoint security suite. "They're testing out new ways of working around it because they're clearly not as successful now that Microsoft has made this change." Users of one notorious malicious program, known as Emotet, have already begun shifting tactics, he notes. "We've seen them shift recently from leveraging macros to using URLs to OneDrive or Google Drive," he says.


Solana blockchain and the Proof of History

The consensus mechanism is a fundamental characteristic of, and differentiator among, blockchains. Solana's consensus mechanism has several novel features, in particular the Proof of History (PoH) algorithm, which enables faster processing times and lower transaction costs. How PoH works is not hard to grasp conceptually; it is harder to understand how it improves processing time and transaction costs. The Solana whitepaper is a deep dive into the implementation details, but in it it can be easy to miss the forest for the trees. Conceptually, Proof of History provides a way to cryptographically prove the passage of time and where events fall in that timeline. It is used in tandem with another, more conventional consensus algorithm such as Proof of Work (PoW) or Proof of Stake (PoS); in Solana, Proof of History makes Proof of Stake more efficient and resilient. You can think of PoH as a cryptographic clock: it timestamps transactions with a hash that proves where in the sequence of events each transaction occurred. This means the entire network can skip verifying the temporal claims of nodes when reconciling the current state of the chain.
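
As a minimal sketch of the cryptographic-clock idea, the toy chain of SHA-256 hashes below shows how mixing a transaction into a sequential hash chain pins it to a verifiable point in the sequence; this is an illustration, greatly simplified relative to Solana's actual implementation:

```python
import hashlib

def poh_chain(seed: bytes, events: dict[int, bytes], ticks: int):
    """Toy Proof of History: a sequential SHA-256 hash chain as a clock.

    Producing tick N requires N sequential hashes, so the chain itself is
    evidence that time passed; mixing an event's bytes into the chain at
    tick i pins that event to a verifiable position in the sequence.
    """
    state = hashlib.sha256(seed).digest()
    log = []
    for tick in range(ticks):
        state = hashlib.sha256(state).digest()          # the clock advances
        if tick in events:
            state = hashlib.sha256(state + events[tick]).digest()
            log.append((tick, events[tick], state.hex()[:16]))
    return state, log

final, log = poh_chain(b"genesis", {3: b"tx: alice pays bob 5"}, ticks=10)
# Any node can re-run the chain and confirm both the final hash and that
# the transaction sits at tick 3 -- no cross-node agreement on wall-clock
# time is needed, which is what lets verification be deferred and batched.
```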


Why and How our AI needs to understand causality

Introducing causality to machine learning can make the model outputs more robust, and prevent the types of errors described earlier. But what does this look like? How can we encode causality into a model? The exact approach depends on the question we are trying to answer and the type of data we have available. ... They trained the model to ask “if I treat this disease, which symptoms would go away?” and “if I don’t treat this disease, which symptoms would remain?”. They encoded these questions as two mathematical formulae. Using these questions brings in causality: if treating a disease causes symptoms to go away, then it’s a causal relationship. They compared their causal model with a model that only looked at correlations and found that it performed better — particularly for rarer diseases and more complex cases. Despite the great potential of machine learning, and the associated excitement, we must not forget our core statistical principles. We must go beyond correlation (association) to look at causation, and build this into our models. 
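
The two formulae themselves aren't reproduced in the excerpt, but the shift from observational to interventional ("do") questions can be demonstrated on a toy structural causal model. Everything below (the model, variable names, probabilities) is invented for illustration and is not the researchers' actual method:

```python
import random

random.seed(0)

def sample(intervene=None):
    """One patient from a toy structural causal model.

    Hidden severity drives both treatment and symptoms, confounding the
    observed treatment/symptom correlation. Passing intervene=True/False
    forces treatment -- Pearl's do() operator; None keeps the natural
    mechanism (sicker patients get treated more often).
    """
    severity = random.random()
    treated = severity > 0.5 if intervene is None else intervene
    p_symptom = (0.2 + 0.7 * severity) * (0.3 if treated else 1.0)
    return treated, random.random() < p_symptom

n = 200_000
observed = [sample() for _ in range(n)]
p_obs = sum(s for t, s in observed if t) / sum(1 for t, _ in observed if t)
p_do_treat = sum(sample(True)[1] for _ in range(n)) / n
p_do_none = sum(sample(False)[1] for _ in range(n)) / n

print(f"P(symptom | treated, observed) = {p_obs:.2f}")       # biased upward
print(f"P(symptom | do(treat))         = {p_do_treat:.2f}")  # causal effect
print(f"P(symptom | do(no treatment))  = {p_do_none:.2f}")
```

The gap between the first two numbers is exactly the correlation-versus-causation trap: the observed rate overstates symptoms among the treated because sicker patients are treated more often, while the do() quantities isolate what treatment itself would change.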


Cyberattack prevention is cost-effective, so why aren’t businesses investing to protect?

To measure the success of an investment, you first need to quantify the cost of what you’re trying to protect. In a simplified model, the first step is to measure the benefits of protection, and this starts with an asset valuation: how valuable is this data to me? Those in charge of the budget then need to evaluate the risk of that data not being protected: if I don’t take the necessary measures to mitigate the risk by investing in preventative cyber-security tools, how costly will a breach be when it occurs? It is more cost-effective to validate an organisation’s existing controls than to spend money on more tools. By adopting specialised frameworks to counteract cyber threats – for instance, running a threat-informed defence utilising automated platforms such as Breach-and-Attack Simulation (BAS) – CISOs can continuously test and validate their systems. Similar to a fire drill, BAS can locate which controls are failing, allowing organisations to remediate the gaps in their defence and become cyber-ready before an attack occurs.
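
As a sketch of that quantification step, the classic annual loss expectancy model (ALE = single loss expectancy × annualized rate of occurrence) puts the investment decision in numbers; every figure below is invented for illustration:

```python
# Annual loss expectancy: ALE = SLE x ARO, where SLE (single loss
# expectancy) = asset value x exposure factor. All figures are invented.
asset_value = 2_000_000      # valuation of the data asset, in dollars
exposure_factor = 0.4        # fraction of the value lost in one breach
sle = asset_value * exposure_factor          # $800,000 per incident
aro = 0.25                   # expected breaches per year (one per 4 years)
ale = sle * aro                              # $200,000 expected loss / year

control_cost = 120_000       # annual spend on preventative controls
risk_reduction = 0.80        # fraction of the ALE the controls remove

print(f"ALE without controls: ${ale:,.0f}")
print(f"Loss avoided:         ${ale * risk_reduction:,.0f}")
print(f"Net annual benefit:   ${ale * risk_reduction - control_cost:,.0f}")
```

If the net annual benefit is positive, the preventative investment pays for itself before a breach ever occurs; validation exercises like BAS sharpen the risk_reduction estimate rather than adding another tool.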


Cyber Resiliency: How CIOs Can Prepare for a Cloud Outage

Beyond security issues, cloud outages can open the door to cascading disruptions affecting both routine business and mission-critical applications. “This can lead to [issues] ranging from revenue loss to more serious impacts -- such as putting lives at risk in the case of critical health care applications,” explains Ravikanth Ganta, a senior director at business consulting firm Capgemini Americas. A cloud outage’s seriousness hinges on several factors, including organizational preparedness, the zones and regions affected, and the services impacted. “In many cases, businesses that build and run their applications in the cloud can endure a cloud outage with little to no impact if they architect their applications to take advantage of the automated failover capabilities readily available in the cloud,” Potter notes. Modular applications designed to leverage loosely coupled services will typically experience only a minor drop in availability or performance during a vendor outage and, in many cases, may not be affected at all. “Customers that ... haven’t architected their applications to gracefully failover or redirect traffic to unimpacted zones or regions, will face greater availability challenges when a cloud provider experiences an outage,” Potter says.
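
Below is a minimal sketch of the graceful-failover pattern Potter describes: health-check a primary region and redirect to a standby. The region names, endpoints, and timeout are hypothetical, and in production a managed load balancer or DNS failover service would normally do this rather than application code:

```python
import urllib.request

# Hypothetical health endpoints, in priority order; in production a managed
# load balancer or DNS failover service performs these checks instead.
REGIONS = [
    ("us-east-1", "https://api.us-east-1.example.com/health"),
    ("us-west-2", "https://api.us-west-2.example.com/health"),
]

def healthy(url: str, timeout: float = 2.0) -> bool:
    """Treat anything other than a fast HTTP 200 as an outage signal."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:          # covers timeouts, DNS failures, HTTP errors
        return False

def active_region() -> str:
    """Route traffic to the first healthy region, failing over in order."""
    for name, url in REGIONS:
        if healthy(url):
            return name
    raise RuntimeError("all regions down -- escalate to incident response")
```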


Why DesignOps Matters: How to Improve Your Design Processes

“A foundational aspect of DesignOps is the adoption of agile work breakdown structures (WBSs) to organize UX work from alignment with broad strategic objectives to screen-level details in a single EAP tool. While this feels foreign to most UX practitioners at first, agile WBS maps quite well to UX work. The business and operational benefits of this approach are profound, including more accurate plans, estimates, tracking and reporting.” With a single working environment for managers, designers, developers, and even stakeholders as part of the DesignOps strategy, everyone can easily align their work and tasks, test and comment on prototypes in real time, eliminate design handoffs, reduce costly iterations, keep track of progress and identify bottlenecks. ... No single designer can handle every process and task; one who tries ends up doing everything but the actual design. Digital product design is a multi-layered job that requires experienced specialists in particular fields. Just as UX and UI design need to be separated, with a distinct expert handling each, there is a need for a dedicated DesignOps person.



Quote for the day:

"The task of the leader is to get his people from where they are to where they have not been." -- Henry A. Kissinger