Daily Tech Digest - September 13, 2022

Data Analytics: The Ugly, But Crucial Step CEOs Can’t Ignore

Leaders only need to look around to see that the best CEOs are making data central to their business. This has become even more important as companies grapple with rising costs. Good data analytics allows companies to stay on top of their purchases and to pass costs on to their customers, a capability that is proving highly valuable these days in the manufacturing and automotive industries. Companies with low data maturity tend to keep data siloed, using different criteria across departments to collect and interpret it. This leads to missed opportunities because the data is never integrated to generate information at a granular level. Such companies may know they just had a good month, but they cannot see how that breaks down per item or how it compares to other periods, so they lack a real understanding of why the month was good and how they might proactively repeat or even improve on the result. One manufacturing firm we know recently employed analytics to clean up its data and, for the first time, obtain a SKU-level view of the profitability of each item it sold.


Extended reality — where we are, what’s missing, and the problems ahead

What’s missing for immersion in VR/MR is full-body instrumentation, so you can move and interact in virtual worlds as you would in the real world. Hand tracking with cameras on a headset has not been very reliable, and the common use of controllers creates a disconnect between how you want to interact with a virtual world and how you must actually interact with it. This is particularly problematic with MR, because you use your bare hand to touch real objects and the controller to touch rendered objects, which spoils the experience. Haptics, which Meta and others are aggressively developing, are only a stop-gap; what’s needed is a way to seamlessly bring a person into the virtual world and allow full interaction and sensory perception as if it were the real world. Standalone AR has had issues with occlusion, which Qualcomm and others are working on. When corrected, rendered objects will look more solid and less like ghostly, partially transparent images. Even so, the use cases for this class of device are already well developed, making it the most attractive solution today.


Global companies say supply chain partners expose them to ransomware

Mitigation of ransomware risk should start at the organization level. “This would also help to prevent a scenario in which suppliers are contacted about breaches to pressure their partner organizations into paying up,” according to the research. In the last three years, 67% of respondents who had been attacked experienced this kind of blackmail to force payment. While ransomware mitigation starts inside the firewall, the research suggests that it must then be extended to the wider supply chain to help reduce the risk from the third-party attack surface. One of the best practices for reducing risk is to gain a comprehensive understanding of the supply chain itself, as well as the corresponding data flows, so that high-risk suppliers can be identified. “They should be regularly audited where possible against industry baseline standards. And similar checks should be enforced before onboarding new suppliers,” according to the research. Other practices include scanning open-source components for vulnerabilities and malware before they are used and built into CI/CD pipelines, running XDR programs to spot and resolve threats before they can make an impact, and running continuous risk-based patching and vulnerability management.
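To make the open-source scanning practice concrete, here is a minimal sketch of a gate script a CI/CD pipeline could call before promoting a build. It assumes the pip-audit tool is installed for a Python project; the file name and the fail-the-build behaviour are illustrative choices, not part of the research.

```python
# Minimal CI gate sketch: scan declared dependencies for known vulnerabilities
# with pip-audit (assumed installed) and fail the pipeline on any finding.
import subprocess
import sys

def scan_dependencies(requirements_file: str = "requirements.txt") -> int:
    result = subprocess.run(
        ["pip-audit", "-r", requirements_file],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print("Vulnerable dependencies found; failing the build.", file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(scan_dependencies())
```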


Playwright: A Modern, Open Source Approach to End-To-End Testing

Unlike other solutions, Playwright doesn’t use the WebDriver protocol. It uses the Chrome DevTools Protocol to communicate with Chromium-based browsers (Chrome/Edge) directly. This approach allows for more direct and quicker communication, so your tests will be more powerful and less flaky. But Playwright doesn’t stop at Chromium browsers. The team behind the project understood that cross-browser tests are essential for an end-to-end testing solution. They’re heavily invested in providing a seamless experience for Safari and Firefox as well, and even Android WebView compatibility is in the works. Testing your sites in Chrome, Edge, Firefox and Safari is just a matter of configuration. And this saves time and headaches! It’s not only about automating multiple browsers, though. If your tests are hard to write and you have to place countless “sleep” statements everywhere, your test suite will take hours to complete and become a burden. To avoid unnecessary waits, Playwright comes with “auto-waiting.” The idea is simple: instead of figuring out when a button is clickable yourself, Playwright performs actionability checks for you.
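As a rough illustration of both points (cross-browser runs as configuration, and auto-waiting), here is a minimal sketch using Playwright's Python bindings; the URL and link text are placeholders.

```python
# Minimal sketch with Playwright's sync Python API (pip install playwright,
# then `playwright install` to fetch the browsers).
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    # Running against Chromium, Firefox and WebKit (Safari's engine) is just a
    # matter of iterating over the installed browser types.
    for browser_type in (p.chromium, p.firefox, p.webkit):
        browser = browser_type.launch()
        page = browser.new_page()
        page.goto("https://example.com")          # placeholder URL
        # No sleep statements: click() auto-waits until the element is
        # attached, visible, stable and enabled before acting.
        page.click("text=More information")       # placeholder link text
        browser.close()
```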


7 ways to create a more IT-savvy C-suite

Carter Busse, CIO at intelligent automation platform provider Workato, stresses the importance of networking with management peers. Each interaction provides an opportunity to ask questions, listen, and share information and insights. “We lack a water cooler in this remote world, but setting up biweekly meetings with my peers helps me understand their priorities and gives me an opportunity to communicate key knowledge,” Busse says. “These meetings also help build the trust that’s so crucial for success as a CIO.” Knowledge communicated to management peers should align with the enterprise’s basic mission. “As CIOs, we need to share our knowledge of the business first, followed by how the technology initiatives our team is working on are aligned with the company mission,” Busse says. “It’s important to work on a shared level of understanding first to ensure that the message lands.” ... Every enterprise leader has a different relationship to technology as well as a different level of IT knowledge. Creating personalized discussions, specific to both the enterprise and the leader’s role, will help develop a more tech-savvy C-suite, which can lead to improved support and adoption of proposed IT solutions.


Consider a mobile-first approach for your next web initiative

When going mobile first, it’s important to remember that content is king. Designers should focus on surfacing exactly the content a user needs and nothing more. Extra elements tend to distract from the user’s focus on the current task, and productivity suffers when screen real estate is limited. So, while it is typical to show all the options in a desktop view, well-designed mobile applications use context to decide what to show when and, just as importantly, what not to show. That doesn’t mean mobile users can’t get to all those fine-grained options; it just means the options that don’t generally support the main use case are hidden behind low-profile UI constructs like collapsible menus and accordions. ... While more common in B2C apps, in recent years many B2B organizations have also been taking advantage of mobile-first strategies. Because mobile-first development prioritizes the smallest screen, it effectively shifts the focus, and the tough conversations about core functionality, earlier in the process ("left," in development terms). By deciding how an app will look and operate on a smartphone before moving on to larger screens and devices, developers, designers and product owners quickly align on what matters to users and customers.


AI Risk Intro 1: Advanced AI Might Be Very Bad

No one knows for sure where the ML progress train is headed. It is plausible that current ML progress hits a wall and we get another “AI winter” that lasts years. However, AI has recently been breaking through barrier after barrier, and so far does not seem to be slowing down. Though we’re still at least some steps away from human-level capabilities at everything, there aren’t many tasks left without at least a proof-of-concept demonstration. Machines have been better at some intellectual tasks for a long time; just consider calculators, which have long been superhuman at arithmetic. With the computer revolution, every task that a human has been able to break down into unambiguous steps (steps that can be carried out with modern computing power) has been added to that list. More recently, more intuition- and insight-based activities have joined it. DeepMind’s AlphaGo beat Lee Sedol, one of the world’s top Go players, in 2016 (Go is a far harder game for computers than chess). In 2017, AlphaGo Zero beat the original AlphaGo 100-0, and AlphaZero went on to beat superhuman chess programs despite training for less than 24 hours purely by playing against itself.


Making Hacking Futile – Quantum Cryptography

There are several methods for exchanging quantum mechanical keys: the transmitter can send light signals to the receiver, or entangled quantum systems can be used. In the current experiment, the scientists employed two quantum mechanically entangled rubidium atoms in two labs 400 meters apart on the LMU campus. The two facilities are linked by a 700-meter-long fiber optic cable that runs under Geschwister Scholl Square in front of the main building. To create an entanglement, the scientists first excite each atom with a laser pulse. The atoms then spontaneously return to their ground state, each releasing a photon. Because of the conservation of angular momentum, the spin of each atom is entangled with the polarization of its emitted photon. The two light particles travel over the fiber optic cable to a receiver station, where a combined measurement of the photons reveals the entanglement of the atomic quantum memories. To exchange a key, Alice and Bob – as the two parties are usually dubbed by cryptographers – measure the quantum states of their respective atoms.
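The sifting step Alice and Bob perform is easier to see in toy form. The sketch below is not a simulation of the LMU experiment or of any real physics; it only illustrates the idea that measurements made in matching bases on entangled pairs give correlated outcomes that can be kept as key bits, while mismatched bases are discarded.

```python
# Toy illustration of entanglement-based key sifting (loosely in the spirit of
# BBM92). No physics is simulated: matching bases simply yield identical bits.
import secrets

def exchange_key(num_pairs: int = 256):
    alice_key, bob_key = [], []
    for _ in range(num_pairs):
        alice_basis = secrets.randbelow(2)   # each side picks a basis at random
        bob_basis = secrets.randbelow(2)
        outcome = secrets.randbelow(2)       # stand-in for the correlated result
        if alice_basis == bob_basis:
            alice_key.append(outcome)        # matching bases: keep the bit
            bob_key.append(outcome)
        # mismatched bases are thrown away during public sifting
    return alice_key, bob_key

alice, bob = exchange_key()
assert alice == bob
print(f"sifted key length: {len(alice)} bits")
```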


Digital Transformation: Connecting The Dots With Web3

Let's remove the blindfold and have a look around. The metaverse of business interactions has multiple businesses, or business contexts, modeled as interconnected domains (and subdomains). Instead of business boundaries naturally becoming system boundaries or bounded domain contexts, we have systems defined at the enterprise level, with spaghetti data integrations driven primarily by those systems and their interfaces. The source of truth remains fragmented across these multiple systems, whether core operations, collaboration or content management. Thanks to the advent of cloud computing, we have some solace in transcending these boundaries through multitenant software and platform services. It's as if we built this world in silos, as concrete islands, and only then started erecting bridges as we discovered more ways to interact and exchange value. In a graph, you can picture systems and their integrations as nodes and edges. The digital transformation blueprint essentially translates into a specification for building the bridges between systems, both internal and external.


Third of IT decision-makers rely on gut feel when choosing network operator

Among the top line findings was that business leaders ranked trustworthiness, professionalism and experience as the top reasons for selecting a network operator. When asked whether consistent and transparent communication or speed (in terms of delivery and operations) was more important to them when choosing a network provider, 64% said communication was by far the prime practical quality required – speed was just 36% of the vote. However, decision-makers in the US are particularly driven by emotion, with 46% attributing more than half of their decision-making processes to it. Also, perceived “quality”, in a network services sense, was a broad and somewhat intangible concept, with no single commonly accepted definition. And while, for most leaders, network quality is a given – with service-level agreements (SLAs) acting as a key safety net – the survey suggested that it does not define or capture all the qualities that matter to decision-makers. In addition to this, 84% of decision-makers thought it should always be possible to speak with a customer services person without using chatbots or automated phone lines. In the US, 90% of leaders were adamant about this.



Quote for the day:

"Nobody is more powerful as we make them out to be." -- Alice Walker

Daily Tech Digest - September 12, 2022

What the 5G Future Means for Digital Workforce Management

Mobile carriers can divide, or “slice” networks into different tracks for different devices or applications. Organizations can enable devices and workstations to have separate networks, all on the same carrier. In practice, this looks a lot like rerouting traffic. A collaborative meeting that requires a lot of bandwidth won’t mean that another team experiences delays or poor network coverage. Organizations can have more control over how they distribute coverage to minimize lost time and productivity. Ultimately, this will make remote work more sustainable. While 2020 may have been the year of transitioning to working remotely, 2021 has proven so far that remote work is here to stay. ... We’re only beginning to see the potential of 5G and AI working in tandem. Recently, IBM partnered with Samsung to leverage AI for mobile devices operating on a 5G network. Their goal was to build a platform that generated alerts for firefighters and law-enforcement officers and addressed issues before they escalated.


The role of organisational culture in data privacy and transparency

In an era of mass personalisation and technological innovation, organisations increasingly need to make consideration of the way they use consumer data a part of their organisational culture. Since the GDPR’s inception back in May 2018, there have been some encouraging findings indicating that consumers are increasingly willing to share their data in exchange for personalised services and improved experiences. In addition, marketers are more confident about their reputation in the eyes of consumers. However, there is still a long way to go to improve consumer trust in marketing and highlight how data can be used as a force for good. Recent Adobe research reveals that over 75 per cent of UK consumers are concerned about how companies use their data. What’s more, an ICO report found that when consumers were asked if they trust brands with their data, little over a quarter (28 per cent) agreed. This proportion must be much larger if businesses are to truly thrive in the digital age. With technologies such as machine learning having a transformative impact on business, there is little doubt that, as they continue to evolve, the data sets they rely on will be key to a competitive advantage.


IoT software trends in 2023

IoT security has become crucial for organisations looking to successfully implement IoT solutions. This is because the acceleration of digital transformation has led to an influx of devices coming online. With the exponential growth in the number of devices now connected to the internet, the attack surface has also grown significantly larger. Opportunistic cybercriminals now have more entry points – from insecure connections and legacy devices to weak digital links – to take control of these IoT devices to spread malware or gain direct access to the network and obtain critical data. For IoT devices, the risks are doubly high for two reasons. Firstly, IoT devices typically do not come with built-in security functions, which makes them an easy target for hackers. Secondly, IoT devices, especially those that are small or light, can easily be misplaced or stolen. Unauthorised users who gain physical possession of a device can easily access your network. This is why cybersecurity is now a huge area of focus for IoT devices and software. Failure to secure IoT ecosystems, meanwhile, could erode trust in their potential across the organisation and waste the investment made in them.


Microservices to Async Processing Migration at Scale

There are two potential sources of data loss. First, if the Kafka cluster itself were to become unavailable, you could of course lose data. One simple way to address that is to add a standby cluster: if the primary cluster becomes unavailable for unforeseen reasons, the publisher (in this case, the Playback API) can publish into the standby cluster instead. The consumer request processor connects to both Kafka clusters and therefore misses no data. Obviously, the tradeoff here is additional cost. For a certain kind of data, this makes sense. Does all data require this? Fortunately not. We have two categories of playback data. Critical data gets this treatment, which justifies the additional cost of a standby cluster. The other, less critical data gets a normal single Kafka cluster; since Kafka itself employs multiple strategies to improve availability, this is good enough. Another source of data loss is at publish time. Kafka has multiple partitions to increase scalability. Each partition is served by an ensemble of servers called brokers, one of which is elected as the leader.
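A minimal sketch of that standby-cluster fallback, using the kafka-python client; the broker addresses and the topic name are hypothetical, and a real deployment would add retries, metrics and back-pressure handling.

```python
# Sketch of publish-time failover to a standby Kafka cluster (kafka-python).
from kafka import KafkaProducer
from kafka.errors import KafkaError

PRIMARY_BROKERS = ["primary-kafka:9092"]   # hypothetical addresses
STANDBY_BROKERS = ["standby-kafka:9092"]

def publish(topic: str, payload: bytes) -> None:
    for brokers in (PRIMARY_BROKERS, STANDBY_BROKERS):
        try:
            producer = KafkaProducer(bootstrap_servers=brokers, acks="all")
            producer.send(topic, payload).get(timeout=10)  # wait for the ack
            producer.flush()
            return
        except KafkaError:
            continue  # primary unreachable: fall back to the standby cluster
    raise RuntimeError("both Kafka clusters are unavailable")

publish("playback-events", b'{"event": "play"}')  # hypothetical topic
```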


The road ahead for workplace transformation with IoT, 5G, and Cloud

Connectivity is the bedrock of IoT solutions, and flexible infrastructure such as 5G can support expanding requirements. 5G also helps reimagine existing use cases and explore newer, transformative use cases that could not be supported by current connectivity technologies. Forecasts suggest there will be as many as 75 billion connected IoT devices by 2025, nearly three times the number in 2019. Of course, like all other technologies, networks will evolve to be self-optimized, with automation, analytics, and artificial intelligence (AI) working across a multivendor cloud ecosystem. Telecom providers will therefore need to focus their network engineering efforts on extreme agility at scale, acceleration through execution excellence, and strong thought leadership and innovation. Essentially, IoT, 5G, and cloud technologies will play a crucial role in the digital transformation of organizations across industry sectors and move enterprises toward Industry 4.0 – the fourth industrial revolution, a concept popularized by Klaus Schwab, founder of the World Economic Forum, riding on increasing interconnectivity and smart automation.


IT services firms face the heat from GCCs in war for talent

GCCs have emerged as a serious influencer of tech talent supply, as they control over a quarter of the total tech workforce in India, he said. “Deep-pocketed GCCs have the advantage of buying talent at a higher price tag as they are comparatively lower volume talent consumers. GCCs are hence known to trigger wage wars against IT service players and other cohorts of tech, especially on hot and niche skill sets. GCCs have hence not just constricted talent supply funnels but also made it pricier for the IT services sector," said Karanth. Wage hikes apart, GCCs also offer a huge brand pull by allowing fresh hires to engage directly with top global brands. Such talent earlier engaged with some of these brands only as employees of IT services firms on project deployments. According to data shared by Xpheno, 23% of talent from the IT services sector has had one or more career movements over the 12-month period through July. With the tech sector recording high attrition rates, ranging from 8% to 37%, the talent movement rate of 23% is in line with the average attrition rates seen in the industry during the period.


Reining in the Risks of Sustainable Sourcing

Sustainable sourcing starts with a basic requirement: “It’s essential to know who you’re buying from and where you’re buying,” O’Connell says. These decisions impact the environmental footprint -- including exposure to climate change, energy efficiency of the grid, production requirements, and circularity considerations. They also provide some direction about specific vendor or supplier risks. Vetting existing and new suppliers is essential. There’s a need to understand a partner’s sustainability goals and whether the firm is a good match. Their practices -- and their risks -- become part of a buyer’s practices and risks. “The engagement strategy should be tailored to drive collaboration and provide support to help both companies achieve their sustainability goals,” O’Connell explains. Ensuring that suppliers can produce enough plant-based materials, alternative fuels or low-carbon concrete is critical to mapping out a carbon reduction plan. Scarcity is a common problem with alternative materials and products. 


Quiet quitting: 9 IT leaders weigh in

“In some respects, IT leaders should be more concerned about the 'quiet quitters' in their workforce than those who actually leave the organization. Notwithstanding the inherent challenges of losing an employee, IT leaders can at least take proactive steps to replace the role with the appropriate talent and skill sets. The situation is not as clear when it comes to quiet quitters. IT leaders must approach quiet quitters with caution and take steps to determine the underlying root cause for this behavior. If 'disengagement' from work is the trigger, IT leaders must take remedial measures not to lose the employee 'emotionally' even though they are physically there. Physically absent employees are easier to replace than emotionally absent workers. ... “Quiet quitting is synonymous with healthy boundaries. So is this concept a good or bad thing? Should HR leaders be concerned? It boils down to the single most valuable lesson the pandemic already taught us: managing employees is not what it used to be. Companies have to adapt. Now more than ever, we have to enable employees to succeed in a more autonomous and self-guided way, and part of that is integrating work into employees’ lives, not life into their work.


US Sanctions Iranian Spooks for Albania Cyberattack

The sanctions are a demonstration that the United States is willing to use its sway over the global financial system to dissuade other governments from cyberattacks against allies, said Dave Stetson, a former attorney-adviser in the Office of the Chief Counsel at the Treasury Department's Office of Foreign Assets Control. Today's sanctions demonstrate "that the U.S. views those cyberattacks against third countries as affecting U.S. national security and foreign policy" and that the White House is prepared to "impose sanctions on the persons who perpetrate those attacks," he told Information Security Media Group. Technically, the Specially Designated Nationals list of sanctioned entities only affects American institutions and individuals, but a new addition is actually a global event. Transactions between foreign entities can easily involve U.S. financial institutions. The federal government hasn't been shy about going after banks that do business with sanctioned individuals even if there's just a momentary nexus to an American financial institution, said Stetson, now a partner with law firm Steptoe & Johnson. Foreign banks also have reputational and customer selection concerns, he added.


The role that data will play in our future

Raising trust levels cannot be addressed in isolation: it requires high-level governance principles and guidelines. Governance frameworks (including data governance ones) must be in place if societies are to anticipate and shape the impact of emerging technologies. Their absence would create scenarios in which the digital revolution, like all revolutions, eventually devours its own children. The realisation has emerged that if we are not able to leverage technology to bring out the best in humans, we are potentially headed for scenarios in which society is fractured and some of our core organisational principles, such as democracy, can be perverted. The COVID crisis turned the digitisation priority into a digitisation imperative. In parallel, new tensions have appeared that could lead, for example, to a splintering of the Internet (the so-called splinternet). Some would even argue that the metaverse we see emerging in front of us is already splintered from the start, and that its rapid, Wild West-style growth will lead to intractable issues if some sort of guiding principles are not adopted soon.



Quote for the day:

"People buy into the leader before they buy into the vision." -- John C. Maxwell

Daily Tech Digest - September 11, 2022

Technical Debt In Machine Learning System – A Model Driven Perspective

The biggest system-level technical debt in machine learning models is explainability. Even as it gains popularity and is applied successfully in many domains, machine learning (ML) has faced increased skepticism and criticism. In particular, people question whether its decisions are well-grounded and can be relied on. Because it is hard to comprehensively understand their inner workings after they are trained, many ML systems — especially deep neural networks — are essentially considered black boxes. This makes it hard to understand and explain the behavior of a model. However, explanations are essential to trust that the predictions made by models are correct. This is particularly important when ML systems are deployed in decision support systems in sensitive areas impacting job opportunities or even prison sentences. Explanations also help to correctly predict a model’s behavior, which is necessary to avoid silly mistakes and identify possible biases. Furthermore, they help to build a well-grounded understanding of a model, which is essential for further improvement and for addressing its shortcomings.
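One common way to probe such a black box is to measure how much the model's performance depends on each input. The sketch below uses scikit-learn's permutation importance on synthetic data purely as an illustration; it is not tied to any particular production system.

```python
# Illustrative only: probing a trained model with permutation importance to see
# which features its predictions actually rely on (scikit-learn).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Shuffle one feature at a time and measure the drop in accuracy: large drops
# mark features the model genuinely depends on, which supports explanation.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, mean_importance in enumerate(result.importances_mean):
    print(f"feature {i}: {mean_importance:.3f}")
```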


Monoliths to Microservices: 8 Technical Debt Metrics to Know

Technical debt is a major impediment to innovation and development velocity for many enterprises. Where is it? How do we tackle it? Can we calculate it in a way that helps us prioritize application modernization efforts? Without a data-driven approach, you may find your team falling into the 79% of organizations whose application modernization initiatives end in failure. In other articles, we’ve discussed the challenges of identifying, calculating and managing technical debt. ... How can you tell if the technical debt in your monolithic application is actually hurting your business? One of the most important metrics that determines investment decisions behind application modernization initiatives is “How much does it cost to keep around?” The cost-of-innovation metric shows a breakdown that makes sense to executive decision-makers: how much of each dollar spent goes to simply maintaining the application, and how much goes toward innovating new features and functionality?
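The metric itself is simple arithmetic; a small sketch with made-up spend figures shows the kind of breakdown executives see.

```python
# Hypothetical figures: of every dollar spent on the application, how much goes
# to keeping the lights on versus building new functionality?
maintenance_spend = 730_000   # bug fixes, patching, servicing technical debt
innovation_spend = 270_000    # new features and functionality
total = maintenance_spend + innovation_spend

print(f"maintenance share: ${maintenance_spend / total:.2f} per dollar")
print(f"innovation share:  ${innovation_spend / total:.2f} per dollar")
```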


Major shift detected in smart home technology deployment

One of the key trends revealed was that home tech users’ growing appetite for internet of things (IoT) and smart home technologies shows no sign of slowing down. The study found that on a global basis, the average number of connected devices per home stood at 17.1 at the end of June 2022, up 10% compared with the same period a year previously. Europe showed the biggest change, with the average number of connected devices per Plume household increasing by 13% to 17.4. Plume-powered homes in the US were found to have the highest penetration of connected devices to date, with an average of 20.2 per home. With up to 10% more devices in Plume-powered households, there was an upward trend (11%) in data consumption across the Plume Cloud. However, the biggest decrease in data consumption was seen in fitness bikes, down by 23%, which likely reflects a change in consumer behaviour, with people returning to the office and exercising outdoors or at the gym as they adjust to the post-pandemic world of hybrid working.


Edge device onboarding: What architects need to consider

Your solution must also take device security into account. As part of every deployment, you will probably need to include sensitive data, such as passwords, certificates, tokens, or keys. How do you plan to distribute them? If you decide to inject those items into the images or templates, you create risk, since someone could access the image and extract that sensitive information. It's better to have the device download them at installation time using a secure channel. This means the edge device has to download these secrets from your central server. But how will you set up that secure channel? You could use encrypted communications or a virtual private network (VPN) tunnel, but that's not enough. How can you be sure that the device is what it says it is and not a possible attacker trying to steal information or gain access to your network? You have another concern: authentication and authorization. Authentication is even more important, especially for companies that use third-party providers to create the device images or add other value to the supply chain.
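As a sketch of the "download secrets at installation time over a secure channel" step, the snippet below uses mutual TLS with the requests library: the device presents its own certificate and verifies the server against a pinned CA bundle. The URL, certificate paths and response format are assumptions for illustration.

```python
# Sketch: an edge device fetching its secrets over mutual TLS at install time.
import requests

CENTRAL_SERVER = "https://provisioning.example.internal/secrets"  # hypothetical

def fetch_device_secrets(device_id: str) -> dict:
    response = requests.get(
        f"{CENTRAL_SERVER}/{device_id}",
        # The device authenticates itself with its client certificate and key...
        cert=("/etc/device/device.crt", "/etc/device/device.key"),
        # ...and verifies the central server against a pinned CA bundle.
        verify="/etc/device/ca-bundle.pem",
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g. tokens, passwords, per-device keys
```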


Governing Microservices in an Enterprise Architecture

Microservice development works best in a domain-driven architecture, which models the applications based on the organization’s real-world challenges. A domain-driven architecture assesses the enterprise infrastructure in light of business requirements and how to fulfill them. Most organizations already have a domain-driven design strategy in place that maps the architecture to business capabilities. Bounded Context is a strategy that is part of domain-driven design. Autonomous teams responsible for microservices are formed around areas of responsibility such as inventory management, product discovery, order management, and online transactions, i.e., bounded context. The domain expertise resides within the team, so the enterprise architect’s responsibility is to guide development to align with strategic goals, balancing immediate needs and future business objectives. When governing microservices as part of the enterprise, applying the C4 model for software architecture—context, containers, components and code—makes sense. 
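Bounded contexts are easiest to see in code: each context owns its own model of a shared business concept and translates explicitly at the boundary. The sketch below is purely illustrative and not tied to any framework or to the teams described above.

```python
# Illustrative only: two bounded contexts each own their model of a product,
# with an explicit translation at the boundary between them.
from dataclasses import dataclass

# --- inventory management context ---
@dataclass
class StockItem:
    sku: str
    quantity_on_hand: int

# --- product discovery context ---
@dataclass
class CatalogEntry:
    sku: str
    display_name: str
    in_stock: bool

def to_catalog_entry(item: StockItem, display_name: str) -> CatalogEntry:
    """Anti-corruption translation: discovery never sees inventory internals."""
    return CatalogEntry(sku=item.sku, display_name=display_name,
                        in_stock=item.quantity_on_hand > 0)
```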


The clash of organizational transformation and linear thinking

The task of organizational transformation in a complex world can be likened to that of herding cats. An extremely linear thinker, faced with 20 cats on the left side of a room and wanting to move them to the right, might pick up one cat, move it to the right, and repeat. Of course, that cat is unlikely to stay on the right side of the room, and our linear thinker is unlikely to outlast 20 cats. But it is possible to set conditions that will cause most, if not all, of the cats to end up on the right, like tilting the floor. ... Defining a clear purpose for an organizational transformation calls upon one of the most basic tasks of leadership: to show people the way forward, and to show why the new world they are being asked to build is superior to the old. The transformation must express the possibility of a new order and must be anchored in what would be considered breakthrough results. Without this clear purpose, the effort required to successfully transform the organization will not seem worthy of commitment on the part of those required to put it into action.


Why Today's Businesses Need To Focus On Digital Trust And Safety

Consumers are paying for the cybersecurity mistakes made by corporations. Ransomware continues to affect consumers, businesses, critical infrastructure and government entities, costing them millions of dollars. In 2021, more than 22 billion personal records were exposed in data breaches, with the Covid-19 pandemic accelerating credit card fraud and phishing attacks. All of this has left consumers more worried than ever about the privacy of their sensitive data. ... Websites and mobile apps rely on third parties to provide rich features like shopping carts, online payment, advertising, AI-based chat and customer support. But third-party code is rarely monitored for safety, as today’s security tools lack the necessary insight. The result is that enterprise digital assets are manipulated into channels that enable credit card skimming attacks, malicious advertising (malvertising), targeted ransomware delivery and worse. As this activity continues to rise, consumers feel increasingly less safe using their favorite platforms.


5 Steps to Successfully Reinvent Your Organization

Don't wait for something catastrophic to occur before you start trying to reinvent your business. Oftentimes, you will start to notice small, clear signals. Recognizing these warning signs early can mean the difference between a smooth reinvention process and one that's painful or difficult. What signals should you look out for? Take the job market, for example. We know that employees are leaving their jobs in record numbers. Microsoft found that as many as 41% of workers have plans to quit in the near future. The reasons, according to a Pew Research Center survey, are low pay (63%), lack of advancement opportunities (63%) and feeling disrespected at work (57%). Although salary increases might not be in the budget this year, you can stave off issues by reinventing your organization's culture or approach to advancement. ... Use your entire team's input and advice when trying to identify opportunities for experimentation. Arrive at a decision, execute, learn, and move on. If you fail, pivot quickly. Using agile methods when reinventing creates an environment where experimentation is safe and there is tolerance for failure. 


The Applications Of Data Science And The Need For DevOps

The importance of DevOps cannot be overstated. DevOps engineers are experts who help developers, data scientists, and IT professionals collaborate on projects. Project managers, or their chain of command, oversee the work of developers and constantly push to deliver all product features as quickly as possible. IT professionals ensure that all networks, firewalls, and servers are operating correctly. For data scientists, this entails changing every model variable and structure. You might be wondering why DevOps is important in this industry. The answer is fairly straightforward: they serve as a liaison between developers and IT. DevOps has many key features, among them testing, packaging, integration, and deployment. They also deal with cybersecurity. ... Programming errors are the leading cause of a team’s failure. DevOps encourages frequent code versions within a constrained development cycle, which makes finding flawed code relatively straightforward. With this, the team can use its time better, employing robust programming concepts to reduce the likelihood of implementation failure.


How to Test Low Code Applications

In a low code platform, you build an application by means of a user interface: for instance, building screens by dragging and dropping items and building logic using process-like flows. This sounds simple, but it can be very complex and error-prone. We’ve seen four generations of low code applications. First there were small, simple, stand-alone applications. Then came small apps on top of SAP, Oracle Fusion or Microsoft Dynamics. The third generation was business-critical but still small apps offering extra functionality alongside the ERP system; with these apps, you don’t have a workaround. Now we’re building big, complex, business-critical core systems that must be reliable, secure and compliant. The level of testing increases with every generation, and in the fourth generation testing is only slightly different from testing high code applications. ... Testing is important if you want to limit the risks when you go into production. Especially when the application is critical for its users, or technically complex, you should test it in a professional way.



Quote for the day:

"Leaders think and talk about the solutions. Followers think and talk about the problems." -- Brian Tracy

Daily Tech Digest - September 06, 2022

Taking Security Strategy to the Next Level: The Cyber Kill Chain vs. MITRE ATT&CK

There are two models that can help security professionals harden network resources and protect against modern-day threats and attacks: the cyber kill chain (CKC) and the MITRE ATT&CK framework. The CKC, developed by Lockheed Martin more than a decade ago, provides a high-level view of the sequence of a cyberattack, from initial reconnaissance through weaponization and action. While it is widely used by security teams, it has its limitations. For example, host attack behaviors are not included in the model, and attackers may bypass or combine multiple steps. The newer MITRE ATT&CK framework maps closely to the CKC but focuses more on cyber resilience to withstand emergent threats. This open-source project also provides substantial support for tracing host attack behaviors. ... Present-day attacks utilize encryption over the network, making it very difficult to detect attack behaviors via the network itself. To overcome this limitation, enterprises typically deploy host security products alongside their network security products. Host security products might include traditional antivirus programs, endpoint detection and response (EDR) solutions or endpoint protection platforms (EPPs).
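To make the relationship between the two models concrete, here is one illustrative and deliberately partial way to line up kill chain phases with ATT&CK tactics; it is not an official mapping from Lockheed Martin or MITRE.

```python
# Illustrative, partial alignment of Cyber Kill Chain phases with MITRE ATT&CK
# tactics -- not an official mapping from either project.
KILL_CHAIN_TO_ATTACK = {
    "Reconnaissance":        ["Reconnaissance"],
    "Weaponization":         ["Resource Development"],
    "Delivery":              ["Initial Access"],
    "Exploitation":          ["Execution", "Privilege Escalation"],
    "Installation":          ["Persistence", "Defense Evasion"],
    "Command and Control":   ["Command and Control"],
    "Actions on Objectives": ["Collection", "Exfiltration", "Impact"],
}

for phase, tactics in KILL_CHAIN_TO_ATTACK.items():
    print(f"{phase:>22} -> {', '.join(tactics)}")
```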


Why Cloud Databases Need to Be In Your Tech Stack

Companies need to operate at a constantly increasing scale — more data, more speed, more customer touchpoints. IDC estimates that there will be 41.6 billion connected IoT devices, or “things,” generating 79.4 zettabytes (ZB) of data in 2025. The only way to keep up with this moving train is to have a cloud database that can handle huge amounts of data and can do so with extreme agility and low latency. There are two types of scaling: horizontal (adding more nodes to a system) and vertical (adding more resources to a single node). Relational databases of old are not elastic: they cannot scale based on the volume and velocity of data access. They are built more like airplanes. If you want to add 20 more seats to your flight, you have to get a new plane built with 20 more seats; you can’t extend the existing plane to accommodate 20 more passengers. This is vertical scaling. Cloud databases are built more like trains. If you want to add 20 more seats to your popular train route, all you have to do is add another coach. This is horizontal scaling.


Report: Organ Transplant Data Security Needs Strengthening

The newest criticism comes from a federal watchdog review of the Health Resources and Services Administration and the nonprofit United Network of Organ Sharing. As of January, nearly 107,000 individuals were candidates on the Organ Procurement and Transplantation Network waitlist. OPTN is designated by the federal government as a "high-value asset." UNOS, which manages its network at the administration's behest, lacked system monitoring and only had draft procedures for access controls when federal auditors conducted their review. The OPTN "is a very 'just in time' system where the time between an organ becoming available and getting it into the right patient can be measured in days or even hours," says Benjamin Denkers, chief innovation officer at consultancy CynergisTek. "Hackers breaching the system could create any number of disruptions to the system connecting available organs with patients in need." A statement from an UNOS spokeswoman shared with Information Security Media Group notes that auditors concluded that "OPTN security controls 'protect the confidentiality, integrity, and availability of transplant data.'"


Spinning uncertainty into success

The Upside of Uncertainty delivers helpful takeaways and, perhaps most important, offers anyone struggling with a murky future the courage to persevere. The book also contains useful insights into shifting one’s perspective in tough times, describing entrepreneurial heuristics that can help shrewd thinkers tap into potential opportunity. For example: pressing on when uncertainty emerges, even at the risk of failure; reframing failure as an opportunity for learning and adaptation; exploiting resources and skills at hand instead of investing too deeply in research before experimenting; and thinking entrepreneurially by leveraging existing resources in new ways. They cite the example of Pokémon Go, which was created by a multiplayer-game designer and digital mapping expert who’d helped create what became Google Maps. He realized that Google Maps’ geopositioning technology could be paired with Pokémon characters to form an engaging augmented reality game. Similarly, the founders of Traveling Spoon, a startup that connects food-focused travelers with local home cooks, saw entrepreneurial potential hiding in plain sight when a local woman shared a delicious homemade meal with them in Mexico.


Design For Security Now Essential For Chips, Systems

“There’s a real danger in security, because of its complications and being really hard to understand, to run into the equivalent of what in sustainability is called green-washing,” said Frank Schirrmeister, senior group director, solutions and ecosystem at Cadence. “This is ‘secure-washing,’ and while there may be government regulations, it’s all about customers in the commercial world. Semiconductor companies and system vendors have to serve their end customers, and for them it’s like selling insurance. You really didn’t know that you needed security until you ran into a real issue. That’s when they say, ‘If I just would have had insurance.’ But how to implement it is really an intricate issue, and it’s hard to understand from a technology perspective. I fear it may be similar to a clean energy ‘Energy Star’ sticker on a washing machine, which may just mean, ‘Yes, I have documented processes.’ That’s why I think there’s a danger of secure-washing, where the end consumer is lulled into a sense that ‘this thing is secure,’ without really understanding what’s underneath, who confirmed it, and what the process was. That’s why standardization is crucial. But it also needs to be transparent.”


The risks of neglecting data governance

Data governance will make or break your organisation’s reputation. The impact of the brand degradation that businesses are likely to suffer once their lax approach to data protection is revealed could be significant. No one wants to transact with a business that will not protect their data. In fact, data protection is set to become the next ‘badge of honour’ for businesses. Whilst sustainability, diversity and fair trade have previously been accolades that customers look for when choosing which businesses to interact with, being a data guardian is a growing phenomenon. The reputational impact that a GDPR fine can have on a business is, therefore, huge and can result in significant customer loss. With the growth of competition in many markets, it is easy for customers to find an alternative. Financially, this loss will often amount to more than the fine itself. Such negligence can also have a negative impact on your supply chain. As with customers, partners, suppliers, and service providers will also choose not to work with organisations who fail to comply with standards such as GDPR.


Choosing the Right Cloud Infrastructure for Your SaaS Start-up

The first consideration is the company’s ability to manage the infrastructure, including the time required, whether humans are needed for day-to-day management, and how resilient the product is to future changes. If the product is used primarily by enterprises and demands customization, then you may need to deploy the product multiple times, which could mean more effort and time from the infrastructure admins. The deployment can be automated, but the automation process requires the product to be stable, so the ROI might not be good for an early-stage product. My recommendation in such cases would be to use managed services: PaaS for infrastructure, managed services for the database/persistence layer, and FaaS (serverless architecture) for compute. ... The key to moving quickly from development to release is to spend more time on coding and testing than on provisioning and deployments. Low-code and no-code platforms are good to start with. Serverless and FaaS are designed to solve these problems. If your system involves many components, building your own boxes will consume too much time and effort. Similarly, setting up Kubernetes will not make it faster.
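To illustrate the FaaS recommendation: a serverless function is typically just a handler the platform invokes per request. The sketch below follows the AWS Lambda Python handler convention behind an API gateway; the event fields and response shape are assumptions for illustration.

```python
# Minimal FaaS sketch following the AWS Lambda Python handler convention.
import json

def handler(event, context):
    """Invoked by the platform per request; no servers to provision or patch."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```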


Edge infrastructure: 7 key facts CIOs should know about security

There is no blanket security solution that will mitigate every risk – that’s true at the edge, in the cloud, and in your datacenter or corporate offices. Your IT stack has multiple layers; even a single application has multiple layers. Your security posture should, too. Edge computing boosts the case for a multi-layered approach to security. This whitepaper describes a layered approach to container and Kubernetes security. While the details may differ in an edge environment, the core concept here remains relevant: A well-planned mix (or layers) of processes, policies, and tools – that lean heavily on automation wherever possible – is vital to securing inherently distributed systems. ... “You have to ensure that you enforce security controls at the granularity of the edge location, and that any edge location that is breached can be isolated away without impacting all the other edge locations,” says Priya Rajagopal, director of product management at Couchbase. This is similar in concept to limiting “east-west” traffic and other forms of isolation and segmentation in container and Kubernetes security. There’s no such thing as zero risk – things happen. 


How to Optimize Your Organization for Innovation

Building a culture that encourages creativity usually requires starting small and supporting frequent iteration. “Be willing to try ideas and approaches that may not work,” suggests Christine Livingston, managing director in the emerging technology practice at business consulting firm Protiviti. Employee-led technology advisory teams and initiative groups allow staffers to feel a sense of ownership while finding solutions to complex issues, observes Susan Tweed, vice president of enterprise technologies at analytics, artificial Intelligence and data management software and services provider SAS. “People can participate in ways that maximizes their strengths,” she says. “Some participants may be great at throwing out ideas while others love the challenge of digging deep to validate the solutions identified as the best options.” Giving teams the freedom to experiment is essential. “When teams are offered the space to create, try, fail, and try again, they are given the opportunity to learn from those experiences and bring that insight into their next projects,” Hapanowicz says.


Protect the Pipe! Secure CI/CD Pipelines With a Policy-Based Approach

Improved security for production systems has forced attackers to look for other avenues. The improvements may be due to the increased use of cloud and managed services, greater general security awareness, and the availability of better tools. With the adoption of programmable infrastructure and Infrastructure-as-Code (IaC), build and delivery systems now have access to production systems. This means a compromise of the build system can be used to access production systems and, in the case of a software vendor, customer environments. Applications are increasingly composed of hundreds of OSS and commercial components, which increases the application's exposure and presents several ways to add malicious code to an application. All of these factors have contributed to attackers shifting focus to Continuous Integration and Continuous Delivery (CI/CD) systems as an easier way to infiltrate multiple production systems. Therefore, it is essential that organizations give equal consideration to securing their CI/CD pipelines, just as they do their production workloads.
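Policy engines such as Open Policy Agent are a common way to enforce this kind of control; as a language-neutral illustration, the sketch below expresses a couple of simple pipeline policies in plain Python. The pipeline fields and the rules themselves are simplified assumptions.

```python
# Illustrative policy gate for a CI/CD pipeline definition; real deployments
# often use a policy engine such as Open Policy Agent instead.
def check_pipeline(pipeline: dict) -> list:
    violations = []
    for step in pipeline.get("steps", []):
        image = step.get("image", "")
        if not image.startswith("registry.internal/"):   # hypothetical registry
            violations.append(f"step '{step.get('name')}' uses untrusted image {image}")
        if step.get("privileged"):
            violations.append(f"step '{step.get('name')}' requests privileged mode")
    if not pipeline.get("artifact_signing", False):
        violations.append("pipeline does not sign its build artifacts")
    return violations

pipeline = {
    "steps": [
        {"name": "build", "image": "registry.internal/python:3.11"},
        {"name": "scan", "image": "docker.io/random/scanner", "privileged": True},
    ],
}
for violation in check_pipeline(pipeline):
    print("POLICY VIOLATION:", violation)
```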



Quote for the day:

"Superlative leaders are fully equipped to deliver in destiny; they locate eternally assigned destines." -- Anyaele Sam Chiyson

Daily Tech Digest - September 05, 2022

How to handle a multicloud migration: Step-by-step guide

The first order of business is to determine exactly what you want out of a multicloud platform: what needs are in play, which functions and services should be relocated, which ones may or should stay in house, what constitutes a successful migration, and what advantages and pitfalls may arise. You may have a lead on a vendor offering incentives or discounts, or company regulations may prohibit a particular type of vendor or multicloud service; this should be part of the assessment. The next step is to determine what sort of funding you have to work with and match this against the estimated costs of the new platform, based on your expectations of what it will provide. There may be a per-user or per-usage fee, flat fees for services, annual subscriptions or specific support charges. It may be helpful to do some initial research on typical multicloud migrations, or on vendors offering the services you intend to use, to give finance and management a baseline for what they should expect to allocate for this new environment, so there are no misconceptions or surprises regarding costs.


Intro to blockchain consensus mechanisms

Every consensus mechanism exists to solve a problem. Proof of Work was devised to solve the problem of double spending, where some users could attempt to transfer the same assets more than once. The first challenge for a blockchain network was thus to ensure that values were only transferred once. Bitcoin's developers wanted to avoid using a centralized “mint” to track all transactions moving through the blockchain. While such a mint could securely deny double-spend transactions, it would be a centralized solution. Decentralizing control over assets was the whole point of the blockchain. Instead, Proof of Work shifts the job of validating transactions to individual nodes in the network. As each node receives a transaction, it attempts the expensive calculation required to discover a rare hash. The resulting "proof of work" ensures that a certain amount of time and computing power were expended by the node to accept a block of transactions. Once a block is hashed, it is propagated to the network with a signature. Assuming it meets the criteria for validity, other nodes in the network accept this new block, add it to the end of the chain, and start work on the next block as new transactions arrive.
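The "expensive calculation required to discover a rare hash" fits in a few lines. The sketch below keeps the difficulty tiny so it runs instantly; real networks use vastly larger targets and different block structures.

```python
# Minimal proof-of-work sketch: find a nonce so that the block's SHA-256 hash
# starts with a given number of zero hex digits.
import hashlib

def mine(block_data: str, difficulty: int = 4):
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("tx1;tx2;tx3;prev=abc123")   # placeholder block contents
print(f"nonce={nonce} hash={digest}")
# Verifying the work takes a single hash, but finding the nonce took many
# attempts: that asymmetry is what makes the proof costly to forge.
```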


Data’s Struggle to Become an Asset

Data’s biggest problem is that it is intangible and malleable. How can you attach a value to something that is always changing, may disappear, and has no physical presence beyond the bytes it occupies in a database? In many organizations, there are troves of data that are collected and never used. Data is also easy to accumulate. Collectively, these factors make it easy for corporate executives to view data as a commodity, and not as something of value. Research firms like Deloitte argue that data will never become an indispensable asset for organizations unless it can deliver tangible business results: “Finding the right project requires the CDO (chief data officer) to have a clear understanding of the organization's wants and needs,” according to Deloitte. “For example, while developing the US Air Force’s data strategy, the CDO identified manpower shortages as a critical issue. The CDO prioritized this limitation early on in the implementation of the data strategy and developed a proof of concept to address it.”


In The Face Of Recession, Investing In AI Is A Smarter Strategy Than Ever

Many business leaders make the mistake of overspending on RPA platforms, blinded by the promise of some future ROI. In reality, due to the need to customize RPA to every client, these decision-makers don’t actually know how long it will take to begin reaping the benefits—if they ever do. I, myself, have made this mistake in the past, spending far too much time and money on a tedious RPA solution that was intended to solve a customer success back-office function, only to find that after the overhead of managing it, the gains were marginal. If business leaders want to fully maximize their investments and reap quicker benefits, they’ll go one giant leap beyond automation, landing in the realm of autonomous artificial intelligence (AI). True AI solutions, which continually learn from a company’s data to become increasingly accurate with time, are the holy grail of ROI. Finance leaders are in a great position to lead the way within their own companies by implementing AI solutions in the accounting function. Across industries, these teams are sagging under the weight of endless, tedious accounting tasks, using outdated, ineffective technology and wasting significant time fixing human errors.


Top 8 Data Science Use Cases in The Finance Industry

Financial institutions can be vulnerable to fraud because of their high volume of transactions. In order to prevent losses caused by fraud, organizations must use different tools to track suspicious activities. These include statistical analysis, pattern recognition, and anomaly detection via machine/deep learning. By using these methods, organizations can identify patterns and anomalies in the data and determine whether or not there is fraudulent activity taking place. ... Tools such as CRM and social media dashboards use data science to help financial institutions connect with their customers. They provide information about their customers’ behavior so that they can make informed decisions when it comes to product development and pricing. Remember that the finance industry is highly competitive and requires continuous innovation to stay ahead of the game. Data science initiatives, such as a Data Science Bootcamp or training program, can be highly effective in helping companies develop new products and services that meet market demands. Investment management is another area where data science plays an important role. 
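As a small illustration of the anomaly-detection approach, the sketch below flags unusually large transaction amounts with scikit-learn's IsolationForest; the data is synthetic and the contamination parameter is an arbitrary choice.

```python
# Illustrative fraud-style anomaly detection on synthetic transaction amounts.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=50, scale=15, size=(1000, 1))    # typical purchases
suspicious = np.array([[5000.0], [7200.0], [9999.0]])    # unusually large ones
transactions = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.005, random_state=0).fit(transactions)
flags = model.predict(transactions)                      # -1 marks anomalies
print(f"{int((flags == -1).sum())} transactions flagged for review")
```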


A Bridge Over Troubled Data: Giving Enterprises Access to Advanced Machine Learning

Thankfully, the smart data fabric concept removes most of these data troubles, bridging the gap between the data and the application. The fabric focuses on creating a unified approach to access, data management and analytics. It builds a universal semantic layer using data management technologies that stitch together distributed data regardless of its location, leaving it where it resides. A fintech organisation can build an API-enabled orchestration layer, using the smart data fabric approach, giving the business a single source of reference without the necessity to replace any systems or move data to a new, central location. Capable of in-flight analytics, more advanced data management technology within the fabric provides insights in real time. It connects all the data including all the information stored in databases, warehouses and lakes and provides the vital and seamless support for end-users and applications. Business teams can delve deeper into the data, using advanced capabilities such as business intelligence. 


Why You Should Start Testing in the Cloud Native Way

Consistently tracking metrics around QA and test pass/failure rates is so important when you’re working in global teams with countless different types of components and services. After all, without benchmarking, how can you measure success? TestKube does just that. Because it’s aware of the definition of all your tests and results, you can use it as a centralized place to monitor the pass/failure rate of your tests. Plus it defines a common result format, so you get consistent result reporting and analysis across all types of tests. ... If you run your applications in a non-serverless manner in the cloud and don’t use virtual machines, I’m willing to bet you probably use containers at this point and you might have faced the challenges of containerizing all your testing activities. Well, with cloud native tests in Testkube, that’s not necessary. You can just import your test files into Testkube and run them out of the box. ... Having restricted access to an environment that we need to test or tinker with is an issue that most of us face at some point in our careers.


Why IT leaders should prioritize empathy

It’s simple enough to practice empathy outside of work, but IT challenges make practicing empathy at work a bigger struggle. Fairly or unfairly, many customers expect technology to work 100 percent of the time. When it doesn’t, it falls on IT leaders to go into crisis mode. Considering many of these applications are mission-critical to the customer’s organizational performance, their reaction makes sense. An unempathetic employee in this situation would ignore the context behind a customer’s emotional response. They might go on the defensive or fail to address the customer’s concerns with urgency. A response like this can prove detrimental to customer loyalty and retention – it takes up to 12 positive customer experiences to make up for one negative experience. Every workplace consists of many different personality types and cultural backgrounds – all with different understandings of and comfort toward practicing empathy. Because of this diversity, aligning on a single company-wide approach to empathy is easier said than done. Yet if your organization fails to secure employee buy-in around the importance of empathy, you risk alienating your customers and letting employees who aren’t well-versed in empathetic communication hold you back.


What devops needs to know about data governance

Looking one step beyond compliance considerations, the next level of importance that drives data governance efforts is trust that data is accurate, timely, and meets other data quality requirements. Moses has several recommendations for tech teams. She says, “Teams must have visibility into critical tables and reports and treat data integrity like a first-class citizen. True data governance needs to go beyond defining and mapping the data to truly comprehending its use. An approach that prioritizes observability into the data can provide collective significance around specific analytics use cases and allow teams to prioritize what data matters most to the business.” Kirk Haslbeck, vice president of data quality at Collibra, shares several best practices that improve overall trust in the data. He says, “Trusted data starts with data observability, using metadata for context and proactively monitoring data quality issues. While data quality and observability establish that your data is fit to use, data governance ensures its use is streamlined, secure, and compliant. Both data governance and data quality need to work together to create value from data.”
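
As a rough illustration of the observability practices both quotes describe, here is a small, generic sketch of freshness and null-rate checks on a critical table. The database, table, columns and thresholds are placeholders, not Collibra's or any other vendor's API.

```python
# Generic data observability sketch: monitor freshness and null rates on a
# critical table and flag anomalies before downstream reports consume it.
import sqlite3

conn = sqlite3.connect("warehouse.db")  # stand-in for the real warehouse

def check_freshness(table: str, ts_column: str, max_lag_hours: float = 24) -> bool:
    """True if the newest row is recent enough to trust (lag computed in SQL, in hours)."""
    (lag_hours,) = conn.execute(
        f"SELECT (julianday('now') - julianday(MAX({ts_column}))) * 24 FROM {table}"
    ).fetchone()
    return lag_hours is not None and lag_hours <= max_lag_hours

def check_null_rate(table: str, column: str, max_rate: float = 0.01) -> bool:
    """True if the share of NULLs in a critical column stays below the threshold."""
    total, nulls = conn.execute(
        f"SELECT COUNT(*), SUM(CASE WHEN {column} IS NULL THEN 1 ELSE 0 END) FROM {table}"
    ).fetchone()
    return total == 0 or (nulls or 0) / total <= max_rate

if not (check_freshness("orders", "updated_at") and check_null_rate("orders", "customer_id")):
    print("Data quality alert: 'orders' failed an observability check")
```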


The Power of AI Coding Assistance

“With AI-powered coding technology like Copilot, developers can work as before, but with greater speed and satisfaction, so it’s really easy to introduce,” explains Oege De Moor, vice president of GitHub Next. “It does help to be explicit in your instructions to the AI.” He explains that during the Copilot technical preview, GitHub heard from users that they were writing better and more precise explanations in code comments because the AI gives them better suggestions. “Users also write more tests because Copilot encourages developers to focus on the creative part of crafting good tests,” De Moor explains. “So, these users feel they write better code, hand in hand with Copilot.” He adds that it is, of course, important that users are made aware of the limitations of the technology. “Like all code, suggestions from AI assistants like Copilot need to be carefully tested, reviewed, and vetted,” he says. “We also continuously work to improve the quality of the suggestions made by the AI.” GitHub Copilot is built with Codex -- a descendant of GPT-3 -- which is trained on publicly available source code and natural language.
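
To illustrate what "being explicit in your instructions to the AI" means in practice, here is a hand-written example of a precise comment-style prompt followed by the kind of completion an assistant such as Copilot might propose. The completion shown is illustrative, not actual Copilot output.

```python
# A precise, comment-style prompt of the sort preview users describe:
# Parse an ISO 8601 date string like "2022-09-13" and return the number of
# days between that date and today; raise ValueError on malformed input.
from datetime import date

def days_since(iso_date: str) -> int:
    parsed = date.fromisoformat(iso_date)  # raises ValueError on bad input
    return (date.today() - parsed).days

# The same habit carries over to tests: a short, explicit test the developer
# writes (or accepts from the assistant) to pin down the expected behavior.
def test_days_since_today_is_zero():
    assert days_since(date.today().isoformat()) == 0
```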



Quote for the day:

"Great Groups need to know that the person at the top will fight like a tiger for them." -- Warren G. Bennis

Daily Tech Digest - September 01, 2022

Cloud Applications Are The Major Catalysts For Cyber Attacks

Those cybersecurity threats have risen substantially in recent years because criminals have built lucrative businesses from stealing data and nation-states have come to see cybercrime as an opportunity to acquire information, influence, and advantage over their rivals. This has paved the way for potentially catastrophic attacks such as the WannaCrypt ransomware campaign that dominated recent headlines. This evolving threat landscape has begun to change the way customers view the cloud. “It was only a few years ago when most of my customer conversations started with, ‘I can’t go to the cloud because of security. It’s not possible,’” said Julia White, Microsoft’s corporate vice president for Azure and security. “And now I have people, more often than not, saying, ‘I need to go to the cloud because of security.’” It’s not an exaggeration to say that cloud computing is completely changing our society. It’s upending major industries such as the retail sector, enabling the kind of mathematical computation that is powering an artificial intelligence revolution, and even having a profound impact on how we communicate with friends, family, and colleagues.


Intel AI chief Wei Li: Someone has to bring today's AI supercomputing to the masses

As is often the case in technology, everything old is new again. Suddenly, says Li, everything in deep learning is coming back to the innovations of compilers back in the day. "Compilers had become irrelevant" in recent years, he said, an area of computer science viewed as largely settled. "But because of deep learning, the compiler is coming back," he said. "We are in the middle of that transition." In his PhD dissertation at Cornell, Li developed a computer framework for processing code in very large systems with what is called "non-uniform memory access," or NUMA. His program refashioned code loops to extract as much parallel processing as possible. But it also did something else particularly important: it decided which code should run depending on which memories the code needed to access at any given time. Today, says Li, deep learning is approaching the point where those same problems dominate. Deep learning's potential is mostly gated not by how many matrix multiplications can be computed but by how efficiently the program can access memory and bandwidth.
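
A toy sketch of the kind of loop restructuring at issue: the same matrix multiplication rewritten with blocking (tiling) so that each inner chunk of work touches a small, cache-friendly region of memory. In practice a compiler performs this transformation automatically; the block size below is an arbitrary placeholder.

```python
import numpy as np

def matmul_naive(A, B):
    # Straightforward triple loop: strides across large regions of B on every
    # iteration, so memory access patterns are cache-unfriendly.
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i, j] += A[i, k] * B[k, j]
    return C

def matmul_blocked(A, B, block=32):
    # Same result, but the loops are tiled so each step works on small
    # sub-blocks of A, B and C that fit comfortably in cache.
    n = A.shape[0]
    C = np.zeros((n, n))
    for i0 in range(0, n, block):
        for j0 in range(0, n, block):
            for k0 in range(0, n, block):
                C[i0:i0 + block, j0:j0 + block] += (
                    A[i0:i0 + block, k0:k0 + block] @ B[k0:k0 + block, j0:j0 + block]
                )
    return C
```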


Event Streaming and Event Sourcing: The Key Differences

Event streaming employs the pub-sub approach to enable more accessible communication between systems. In the pub-sub architectural pattern, consumers subscribe to a topic or event, and producers post to these topics for consumers’ consumption. The pub-sub design decouples the publisher and subscriber systems, making it easier to scale each system individually. The publisher and subscriber systems communicate through a message broker like Apache Pulsar. When a state changes or an event occurs, the producer sends the data (data sources include web apps, social media and IoT devices) to the broker, after which the broker relays the event to the subscriber, who then consumes it. Event streaming involves the continuous flow of data from sources like applications, databases, sensors and IoT devices. Event streams employ stream processing, in which data is processed and analyzed as it is generated. This quick processing translates to faster results, which is valuable for businesses with a limited time window for taking action, as with any real-time application.
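
A minimal sketch of that flow using the Apache Pulsar Python client (pulsar-client), assuming a broker is reachable at localhost:6650; the topic, subscription name and payload are illustrative.

```python
import pulsar

client = pulsar.Client("pulsar://localhost:6650")

# Subscriber side: register a subscription on the topic through the broker.
consumer = client.subscribe("sensor-events", subscription_name="analytics")

# Producer side: an application or IoT device publishes a state change.
producer = client.create_producer("sensor-events")
producer.send(b'{"device_id": "pump-7", "temperature_c": 81.4}')

# The broker relays the event to the subscriber, which consumes and acks it.
msg = consumer.receive()
print("Received:", msg.data())
consumer.acknowledge(msg)

client.close()
```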


Big cloud rivals hit back over Microsoft licensing changes

In a nutshell, the changes that come into effect from October allow customers with Software Assurance or subscription licenses to use these existing licenses "to install software on any outsourcers' infrastructure" of their choice. But as The Register noted at the time, this specifically excludes "Listed Providers", a group that just happens to include Microsoft's biggest cloud rivals – AWS, Google and Alibaba – as well as Microsoft's own Azure cloud, in a bid to steer customers to Microsoft's partner network. ... These criticisms are not entirely new, and some in the cloud sector made similar points following Microsoft's disclosure of some of the licensing changes it intended to make back in May. One cloud operator who requested anonymity told The Register in June that Redmond's proposed changes fail to "move the needle" and ignore the company's "other problematic practices." Another AWS exec, Matt Garman, posted on LinkedIn in July that Microsoft's proposed changes did not represent fair licensing practice and were not what customers wanted.


Machine learning at the edge: The AI chip company challenging Nvidia and Qualcomm

Built on 16nm technology, the MLSoC’s processing system consists of computer vision processors for image pre- and post-processing, coupled with dedicated ML acceleration and high-performance application processors. Surrounding the real-time intelligent video processing are memory interfaces, communication interfaces, and system management — all connected via a network-on-chip (NoC). The MLSoC features low operating power and high ML processing capacity, making it ideal as a standalone edge-based system controller, or as an ML-offload accelerator for processors, ASICs and other devices. The software-first approach includes carefully defined intermediate representations (including the TVM Relay IR), along with novel compiler-optimization techniques. ... Many ML startups are focused on building only pure ML accelerators rather than an SoC with a computer-vision processor, application processors, CODECs, and external memory interfaces, which enable the MLSoC to be used as a standalone solution that does not need to connect to a host processor. Other solutions usually lack network flexibility, performance per watt, and push-button efficiency – all of which are required to make ML effortless for the embedded edge.
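
For readers unfamiliar with the TVM Relay IR mentioned above, the following is a rough, generic sketch of importing and compiling a model with Apache TVM. It is not this vendor's toolchain, and the model file, input name, shape and compilation target are placeholders.

```python
# Generic Apache TVM flow (assumed API; versions differ): import a model into
# the Relay IR, apply graph-level optimizations, and generate code for a target.
import onnx
import tvm
from tvm import relay

model = onnx.load("detector.onnx")  # placeholder model file
mod, params = relay.frontend.from_onnx(model, shape={"input": (1, 3, 224, 224)})

# Graph-level optimizations (operator fusion, layout transforms, etc.) run
# before code generation for the chosen target.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
```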


Why CIOs Need to Be Even More Dominant in the C-Suite Right Now

“Now more than ever, we’re seeing a pressing demand for CIOs to deliver digital transformation that enables business growth to energize the top line or optimize operations to eliminate cost and help the bottom line,” says Savio Lobo, CIO of Ensono. This requires the CIO to have a deep understanding of the business and to surface decisions that may influence these objectives. Large-scale digital solutions and capabilities, however, often cannot be implemented simultaneously, especially when they require significant change in how customers and staff engage with people and processes. This means ruthless prioritization decisions may need to be made about what is moving forward at any given time and, equally importantly, what is not. “While executing a large initiative, there will also be people, process and technology choices to be made and these need to be made in a timely manner,” Lobo adds. This may look different for every organization but should include collaboration on discovery and implementation and an open feedback loop for how systems and processes are working or not working in each stakeholder’s favor.


Ensuring security of data systems in the wake of rogue AI

A ‘Trusted Computing’ model, like the one developed by the Trusted Computing Group (TCG), can be readily applied to all four of these AI elements in order to fully secure an AI against going rogue. Considering the data set element of an AI, a Trusted Platform Module (TPM) can be used to sign and verify that data has come from a trusted source. A hardware Root of Trust, such as the Device Identifier Composition Engine (DICE), can make sure that sensors and other connected devices maintain high levels of integrity and continue to provide accurate data. Boot layers within a system each receive a DICE secret, which combines the preceding layer’s secret with the measurement of the current layer. This ensures that when there is a successful exploit, the exposed layer’s measurements and secrets will be different, securing data and protecting against data disclosure. DICE also automatically re-keys the device if a flaw is unearthed within the device firmware. The strong attestation offered by the hardware makes it a great tool for discovering any vulnerabilities in required updates.
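
The layered derivation described here can be sketched in a few lines. The following is a simplified illustration of the idea (combine the previous layer's secret with a measurement of the current layer), not the TCG specification; the layer images and device secret are placeholders.

```python
# Simplified DICE-style layering: each boot layer's secret is derived from the
# previous layer's secret and a hash ("measurement") of the current layer.
import hashlib
import hmac

def measure(layer_code: bytes) -> bytes:
    """Measurement of a boot layer: a hash of its firmware/code image."""
    return hashlib.sha256(layer_code).digest()

def next_secret(previous_secret: bytes, layer_code: bytes) -> bytes:
    """Derive the current layer's secret from the previous secret and the
    measurement of the current layer."""
    return hmac.new(previous_secret, measure(layer_code), hashlib.sha256).digest()

uds = b"\x00" * 32                        # placeholder for the hardware-held device secret
s1 = next_secret(uds, b"bootloader v1")   # layer 1 secret
s2 = next_secret(s1, b"firmware v1")      # layer 2 secret

# If the firmware changes (exploited or re-flashed), its measurement changes,
# so the derived secret changes as well: the device is effectively re-keyed
# and earlier secrets are not exposed.
s2_patched = next_secret(s1, b"firmware v2")
assert s2 != s2_patched
```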


The Implication of Feedback Loops for Value Streams

The practical implication for software engineering management is to first address feedback loops that generate a lot of bugs/issues, in order to get your capacity back. For example, if you have a fragile architecture or code of low maintainability that requires a lot of rework after any new change is implemented, it is obvious that refactoring is necessary to regain engineering productivity; otherwise, engineering team capacity will remain low. The last observation is that lead time depends on the simulation duration: the longer you run the value stream, the more lead-time variants you will get. This behavior is a direct implication of the value stream structure, with its redo feedback loop and the probability distribution between the output queue and the redo queue. If you are an engineering manager who has inherited legacy code with significant accumulated debt, it might be reasonable to consider incremental solution rewriting; otherwise, the speed of delivery will stay very slow indefinitely, not only during the modernization period. The art is in simplicity: greater complexity yields more variation, which increases the probability of results occurring outside of acceptable parameters.
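
The behavior described above can be reproduced with a small Monte Carlo sketch: each work item passes through a step and, with some probability, loops back through the redo queue, so a higher redo probability inflates lead times and eats capacity. All probabilities and step times below are illustrative.

```python
import random

def simulate_lead_times(items: int, redo_probability: float,
                        step_time: float = 1.0, seed: int = 42) -> list:
    """Simulate a value stream step with a redo feedback loop and return lead times."""
    random.seed(seed)
    lead_times = []
    for _ in range(items):
        lead_time = step_time
        while random.random() < redo_probability:  # defect found, item re-enters the redo queue
            lead_time += step_time
        lead_times.append(lead_time)
    return lead_times

fragile = simulate_lead_times(10_000, redo_probability=0.5)   # fragile architecture, lots of rework
refactored = simulate_lead_times(10_000, redo_probability=0.1)  # after refactoring
print("fragile codebase  :", sum(fragile) / len(fragile), "avg steps per item")
print("after refactoring :", sum(refactored) / len(refactored), "avg steps per item")
```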


Beat these common edge computing challenges

Realizing the benefits of edge computing depends on a thoughtful strategy and careful evaluation of your use cases, in part to ensure that the upside will dwarf the natural complexity of edge environments. “CIOs shouldn’t adopt or force edge computing just because it’s the trendy thing – there are real problems that it’s intended to solve, and not all scenarios have those problems,” says Jeremy Linden, senior director of product management at Asimily. Part of the intrinsic challenge here is that one of edge computing’s biggest problem-solution fits – latency – has sweeping appeal. Not many IT leaders are pining for slower applications. But that doesn’t mean it’s a good idea (or even feasible) to move everything out of your datacenter or cloud to the edge. “So for example, an autonomous car may have some of the workload in the cloud, but it inherently needs to react to events very quickly (to avoid danger) and do so in situations where internet connectivity may not be available,” Linden says. “This is a scenario where edge computing makes sense.” In Linden’s own work – Asimily does IoT security for healthcare and medical devices – optimizing the cost-benefit evaluation requires a granular look at workloads.


Tenable CEO on What's New in Cyber Exposure Management

Tenable wants to provide customers with more context around what threat actors are exploiting in the wild to both refine and leverage the analytics capabilities the company has honed, Yoran says. Tenable must have context around what's mission-critical in a customer's organization to help clients truly understand their risk and exposure rather than just add to their cyber noise, he adds. Tenable has spent more on vulnerability management-focused R&D over the past half-decade than its two closest competitors combined, which has allowed the firm to deliver differentiated capabilities, Yoran says. Unlike competitors who have expanded their offerings to include everything from logging and SIEM to EDR and managed security services, Yoran says Tenable has remained laser-focused on risk. "The three primary vulnerability management vendors have three very different strategies and they've been on divergent paths for a long time," Yoran says. "For us, the key to success has been and will continue to be that focus on helping people assess and understand risk."



Quote for the day:

"Get your facts first, then you can distort them as you please." -- Mark Twain