Daily Tech Digest - May 09, 2022

Does low code make applications overly complex?

To be clear, complexity is not the inevitable outcome of low code. Just as in traditional application development, complexity can and often does make its way into the lifecycle of the product code base. While not inevitable, it is common. There are many steps you can take to reduce complexity in apps regardless of how they are built, which improves performance, scalability, availability, and speed of innovation. Yes, a low code application, like all applications, can become complex and requires simplification techniques to keep that complexity in check. But these issues are not tied to the use of low code; they are just as significant in conventional development processes. What low code does increase is the amount of code in your application that was not written directly by your development team: code auto-generated by the low code platform, or pulled in from libraries required for your application to function. Thus there is often more “unknown” code in your application when you use low code techniques. But unknown is not the same thing as complexity.


Ultra-fast Microservices: When Microstream Meets Wildfly

Microservices pose several challenges to software engineers, especially as a first step into distributed systems. But that does not mean we're alone: there are several tools that make our life easier in the Java world, especially MicroProfile. MicroProfile aims to optimize enterprise Java for a microservices architecture. It is based on the Java EE/Jakarta EE standard plus APIs designed specifically for microservices, such as a REST client, Configuration, OpenAPI, etc. Wildfly is a powerful, modular, and lightweight application server that helps you build amazing applications. ... Unfortunately, we don't have enough articles that talk about it. Even with schemaless databases, we should have a model when information about the business is still uncertain. Still, the persistence layer has more issues, mainly because it is harder to change. One of the secrets to making a scalable application is statelessness, but we cannot afford that in the persistence layer: primarily, the database aims to keep information and its state.


CPaaS – a technology for the future

What has made CPaaS the go-to method for customer engagement is the ubiquity of cloud technology and how it has transformed the way businesses operate. “Companies had to come up with different ways to interact with customers,” says IDC research VP Courtney Munroe, who points out that in the last few years there has been a steady move to cloud and, in particular, there has been a confluence of mobility and cloud. “More people use smartphones and companies realised that they could develop apps for them,” he says. Steve Forcum, chief evangelist at Avaya, is also aware of the importance of cloud within enterprises looking to engage with customers. “Some customers may keep elements of their communications stack in their datacentres, but more are then infusing cloud-based capabilities,” he says. “We’ve moved to help customers across this spectrum by bringing cloud-based benefits to their datacentres.” But the technology itself is secondary to companies’ need to be more responsive to customers. The underlying drive towards CPaaS is the need to offer a more flexible way to interact with customers.


How Should you Protect your Machine Learning Models and IP?

The most concerning threat is frequently “Will releasing this make it easy for my main competitor to copy this new feature and hurt our differentiation in the market?” If you haven’t spent time personally engineering ML features, you might think that releasing a model file, for example as part of a phone app, would make this easy, especially if it’s in a common format like a TensorFlow Lite flatbuffer. In practice, I recommend thinking about these model files like the binary executables that contain your application code. By releasing it you are making it possible to inspect the final result of your product engineering process, but trying to do anything useful with it is usually like trying to turn a hamburger back into a cow. Just as with executables, you can disassemble a model to get its overall structure by loading it into a tool like Netron. You may be able to learn something about the model architecture, but just like disassembling machine code it won’t actually give you much help reproducing the results. Knowing the model architecture is mildly useful, but most architectures are well known in the field anyway and only differ from each other incrementally.


The new cybersecurity mandate

Bearing security in mind at all times rings true, as it inspires us to think about what the security implications are as we are making changes. On the other hand, it has something of a resemblance to the old premature performance optimization debate. We’re not going to wade into that here (or the test-driven development debate, or any other similar one). I just want to point out that software development is laden with complexity and obstacles to action. Security considerations must be factored into the equation. The next bullet point in the fact sheet makes the following statement: “Develop software only on a system that is highly secure and accessible only to those actually working on a particular project.” This one makes the reader pause for a moment. It seems to have arrived at the conclusion that in order to build secure systems, we should build secure systems. If we are patient, the next sentence helps deliver the full meaning: “This will make it much harder for an intruder to jump from system to system and compromise a product or steal your intellectual property.” What the framers of this fact sheet are driving at here is actually something like a rephrasing of zero trust architecture.


US Passes Law Requiring Better Cybercrime Data Collection

The impact of this legislation depends entirely on the usefulness of the taxonomy itself, says Jennifer Fernick, senior vice president and global head of research at security consultancy NCC Group. "The authors of that taxonomy need to meaningfully answer what data points about cybercrime will enable meaningful intervention for the future prevention of these crimes," Fernick, who is also a National Security Institute visiting technologist fellow at George Mason University, tells Information Security Media Group. "It is important, for example, to distinguish at minimum between computer-related crimes that attack human judgment or exploit edge cases in business processes from crime that is enabled through specific hardware or software flaws that can be exploited by criminals attacking an organization's IT infrastructure. In the latter case, it would be valuable in particular to identify the specific software or hardware components, or even specific security vulnerabilities or CVEs, which served as the substrate for the attack, to help inform organizations about where they would most benefit from strengthening their cybersecurity defenses," Fernick says.


How smart data capture is innovating the air travel experience

Using smart data capture on mobile devices has multiple benefits. Unlike fixed scanners, it enables customer service agents to perform multiple tasks anywhere in the airport. Airlines can automate processes such as check-in, security queues, lounge access, and luggage management, providing a modern, sleek impression from the first moment a passenger enters the terminal. Compared with the old approach of using rugged devices at fixed stations, smart data capture on mobile devices delivers significant customer benefits and staff efficiencies. Airport queues have been big news recently, but with staff equipped with smart mobile devices, waiting times can be cut as they can patrol queues and scan IDs, passports and QR codes to speed passengers through check-in and deliver a more personalised experience — accessing details about a passenger’s seat preferences or dietary requirements, for example. Customer service agents using smart mobile devices can easily manage oversized luggage presented at the gate and quickly check it into the hold.


Are Blockchain and Decentralized Cloud Making Much Headway?

Basically, the value of decentralized cloud in its current form boils down to the circumstances and needs of the users. “If you’re setting up a mining node and need some cloud power, why would you want to pay AWS?” Litan asks. A decentralized cloud might be cheaper to run in such cases, she says, which appeals to miners who want cheap computing in order to make money on the margins. At the moment, when many developers write applications, they look to the most readily available cloud service, Litan says, and then wind up deploying on the main blockchain where there is no control over where Ethereum or Bitcoin run. “It’s like saying, ‘Where’s the internet running?’” There is some possibility for blockchain and decentralized cloud to gain more momentum down the road, but for now their impact on the entirety of cloud computing remains rather niche. “It may become more important as people start writing compute-intensive workloads and they want to keep the cost down,” Litan says. Decentralized cloud computing may also be useful for organizations running non-blockchain applications, she says. 


IT hiring: Assumptions and truths about the current talent shortage

It can be difficult to drive growth when teams are stretched and global tensions are high, as they have been for the better part of two years. New process adoption can meet resistance from employees who are already overwhelmed. If and when this happens, a stalemate often follows, and team leaders opt to wait it out, deferring change to another team or another time. ... The pandemic challenged us all to rethink the way we work. Investments in software took the place of physical office space, and teams were pushed to automate repeatable tasks to maintain a pre-pandemic level of efficiency. With the implementation of artificial intelligence and machine learning, workflow improvements can be expedited, lessening the need for as many employees. Technologies like low-code and no-code are easing the burden felt by developers by enabling employees outside of IT to build systems unique to their needs without the slowdown created by a backlog of IT tickets. In turn, this frees the bandwidth for developers to turn toward other pressing concerns like security.


Is it time to fire yourself?

This idea was brought to life when I interviewed Bracken Darrell, the CEO of Logitech International, a computer peripherals manufacturer headquartered in Switzerland and the US. In that conversation, he shared with me the story of how, about five years into his tenure at the company, he asked himself one Sunday night, “Am I the right person for the next five years?” On paper, he certainly was, he told me, given that all his changes at the company had lifted the stock about 500%. “On the other hand, I had been involved in every single personnel and strategic decision,” he said. “My disadvantage was that I knew too much, and that I was too embedded in everything we were doing. I just thought to myself that I might be done.” So he decided that night that he was going to fire himself, but he would sleep on the decision. The punchline is that he didn’t fire himself, but he did wake up the next morning with a sense of clarity of what he needed to do: “I have to rehire myself but have no sacred cows. It was super exciting and fun, and I started changing things that I had put in place. Fortunately, I didn’t have to change things radically, but I felt new again.”



Quote for the day:

"Risks are the seeds from which successes grow." -- Gordon Tredgold

Daily Tech Digest - May 08, 2022

Your mechanical keyboard isn't just annoying, it's also a security risk

If this has set you on edge then I have both good and bad news for you. The good news is that while this is fairly creepy, it's unlikely that hackers will be able to break into your private space and place a microphone in close enough proximity to your keyboard without you noticing. The bad news is that there are plenty of other ways that your keyboard could be giving away your private information. Keystroke capturing dongles exist that can be plugged into a keyboard’s USB cable, and wireless keyboards can be exploited using hardware such as KeySweeper, a device that can record keyboards using the 2.4GHz frequency when placed in the same room. There are even complex systems that use lasers to detect vibrations or fluctuations in powerlines to record what's being written on a nearby keyboard. Still, if you're a fan of mechanical keyboards then don't let any of this deter you, especially if you use one at home rather than in a public office environment. It's highly unlikely that you need to take extreme measures in your own home and just about everything comes with a security risk these days.


Relational knowledge graphs will transform business

"There have been many generations of algorithms built that have all been created around the idea of a binary one," said Muglia. "They have two tables with the key to join the two together, and then you get a result set, and the query optimizer takes and optimizes the order of those joins — binary join, binary join, binary join!" The recursive problems such as Fred Jones's permissions, he said, "cannot be efficiently solved with those algorithms, period." The right structure for business relationships, as distinct from data relationships, said Muglia, is a knowledge graph. "What is a knowledge graph?" asked Muglia, rhetorically. He offered his own definition for what can be a sometimes mysterious concept. "A knowledge graph is a database that models business concepts, the relationships between them, and the associated business rules and constraints." Muglia, now a board member for startup Relational AI, told the audience that the future of business applications will be knowledge graphs built on top of data analytics, but with the twist that they will use the relational calculus going all the way back to relational database pioneer E.F. Codd.


We Need to Talk about the Software Engineer Grind Culture

SWE culture can be very toxic. Generally, I found that people who get rewarded within software engineering are those who sacrifice their personal time for their project/job. We reward people who code an entire project in 24 hours (I mean, just think about the popularity of hackathons). I remember watching a TikTok from a tech creator who said that US software engineers are paid so much not because of what they do during work hours, but because of all of the extra work they do outside of it. Ask yourself: are you paid enough to sacrifice your life outside of work? So many of us are conditioned to this rat race. I realized that this grind has caused me to lose out on any hobbies outside of coding. There are so many software engineers who are also tech creators on the side. Whether they run a Twitch channel dedicated to coding, make YouTube videos about coding, or create tech content on TikTok, it usually has something to do with this specialization in software engineering. The reason these channels are so successful is that we, as software engineers, have bought into this narrative.


Managing Tech Debt in a Microservice Architecture

This company has a lot of dedicated and smart engineers, which most probably explains how they were able to come up with what they call the technology capability plan. I find the TCP to be a truly innovative community approach to managing tech debt. I've not seen anything like it anywhere else. That's why I'm excited about it and want to share what we have learned with you. Here is the stated purpose of the TCP. It is used by and for engineering to signal intent to both engineering and product, by collecting, organizing, and communicating the ever-changing requirements in the technology landscape for the purposes of architecting for longevity and adaptivity. In the next four slides of this presentation, I will show you how to foster the engineering communities that create the TCP. You will learn how to motivate those communities to craft domain specific plans for paying down tech debt. We will cover the specific format and purpose of these plans. We will then focus on how to calculate the risk for each area of tech debt, and use that for setting plan priorities. 


Shedding Light On Toil: Ways Engineers Can Reduce Toil

More proactive monitoring is another way to reduce toil, according to Englund and Davis. “Responding to a crash loop is responding too late,” added Davis. Instead, he advocated that SREs look toward leading indicators that suggest the potential for failure so that teams can make adjustments well before anything drastic occurs. If SLIs like error rate and latency are getting bad, you must take reactive measures to fix them, causing more toil. Instead, proactive monitoring is best to see the cresting wave before the flood. Leading indicators could arise from following things like data queue operations connected to servers or the saturation of a particular resource. “If you can figure out when you’re about to fail, you can be prepared to adapt,” said Davis. One major caveat of standardization is that you’re inevitably going to encounter edge cases that require flexibility. And when an outage or issue does arise, the remediation process is often unique from case to case. As a result, not all investment into standardization pays out. Alternatively, teams that know how to improvise together are better equipped for unforeseen incidents.
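Davis's "cresting wave" idea can be sketched in a few lines of arithmetic: rather than alerting only once a resource is saturated, fit the recent trend of a leading indicator and estimate how long until it hits capacity. The queue-depth numbers and thresholds below are invented for illustration; a real system would pull these from monitoring.

```python
# Minimal leading-indicator sketch: estimate time until a queue saturates
# from its recent growth trend, so you can act before the failure.

def minutes_until_saturation(samples, capacity):
    """samples: queue depth measured once per minute, oldest first.
    Returns estimated minutes until depth reaches capacity, or None if
    the trend is flat or decreasing (no saturation predicted)."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    # Least-squares slope: growth in queue depth per minute.
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den
    if slope <= 0:
        return None
    return (capacity - samples[-1]) / slope

# Depth grew 100 -> 500 over five minutes; at ~100/min, a 1000-item queue
# saturates in roughly five more minutes, which is the window to act.
eta = minutes_until_saturation([100, 200, 300, 400, 500], capacity=1000)
print(round(eta, 1))  # 5.0
```

The same shape of calculation applies to disk fill rates, connection-pool usage, or any other saturating resource.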


Are your SLOs realistic? How to analyze your risks like an SRE

You can reduce the impact on your users by reducing the percentage of infrastructure or users affected or the requests (e.g., throttling part of the requests vs. all of them). In order to reduce the blast radius of outages, avoid global changes and adopt advanced deployment strategies that allow you to gradually deploy changes. Consider progressive and canary rollouts over the course of hours, days, or weeks, which allow you to reduce the risk and to identify an issue before all your users are affected. Further, having robust Continuous Integration and Continuous Delivery (CI/CD) pipelines allows you to deploy and roll back with confidence and reduce customer impact. Creating an integrated process of code review and testing will help you find the issues early on before users are affected. Improving the time to detect means that you catch outages faster. As a reminder, having an estimated TTD (time to detect) expresses how long until a human being is informed of the problem.
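The risk analysis described above is simple arithmetic: an SLO implies a fixed error budget per period, and each risk consumes an expected share of it based on incident frequency, time to detect, time to repair, and blast radius. All the numbers below are made-up assumptions for illustration.

```python
# Back-of-the-envelope SLO risk analysis with illustrative numbers.

def error_budget_minutes(slo: float, period_days: int = 30) -> float:
    """Minutes of full outage the SLO allows per period."""
    return (1.0 - slo) * period_days * 24 * 60

def expected_bad_minutes(incidents_per_year, ttd_min, ttr_min, impact_fraction):
    """Expected user-impacting minutes per 30 days for one risk:
    each incident lasts roughly TTD (time to detect) + TTR (time to repair),
    scaled by the fraction of users affected (the blast radius)."""
    per_month = incidents_per_year / 12
    return per_month * (ttd_min + ttr_min) * impact_fraction

budget = error_budget_minutes(0.999)  # 99.9% SLO -> 43.2 min per 30 days
# Hypothetical risk: a bad rollout 6x/year, detected in 20 min, rolled back
# in 40 min, canaried so only 10% of users are affected.
cost = expected_bad_minutes(6, ttd_min=20, ttr_min=40, impact_fraction=0.1)
print(f"budget={budget:.1f} min, risk consumes {cost:.1f} min "
      f"({100 * cost / budget:.0f}% of budget)")
```

Plugging in different TTD or impact fractions shows directly why canarying and faster detection pay off: either change shrinks the budget each risk consumes.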


5 Ways to Drive Mature SRE Practices

Project failure — and the way it’s regarded within the organization — is often as important as success. To create maximum value, SREs must be free to experiment and work on strategic projects that push the boundaries, understanding they will fail as often as they succeed. However, according to the “State of SRE Report,” only a quarter of organizations accept the “fail fast, fail often” mantra. To mature their practice, enterprises must free SREs from the traditional cost constraints placed upon IT and encourage them to challenge accepted norms. They should be setting new benchmarks for innovative design and engineering practices, not be bogged down in the minutiae of development cycles. Running hackathons and bonus schemes focused on reliability improvements is a great way to uplevel SREs and encourage an organizational culture of learning and experimentation, where failure is valued as much as success. Measurement is critical to developing any IT program, and SRE is no exception. To truly understand where performance gaps are and optimize critical user journeys, SREs need to go beyond performance monitoring data.


The Future of Data Management: It’s Already Here

Data fabric can automatically detect data abnormalities and take appropriate steps to correct them, reducing losses and improving regulatory compliance. A data fabric enables organizations to define governance norms and controls, strengthen risk management, and improve monitoring—something of increasing importance as legal standards for data governance and risk management become more demanding and compliance more vital. It also enhances cost savings through the avoidance of potential regulatory penalties. A data fabric represents a fundamentally different way of connecting data. Those who have adopted one now understand that they can do many things differently, providing an excellent route for enterprises to reconsider a host of issues. Because data fabrics span the entire range of data work, they address the needs of all constituents: developers, business analysts, data scientists, and IT team members collectively. As a result, POCs will continue to grow across departments and divisions.
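The "detect data abnormalities" step can be as simple as flagging values that deviate sharply from a column's recent distribution. The z-score check below is a deliberately minimal stand-in for what a data fabric would automate at scale; the data and threshold are invented for illustration.

```python
import statistics

def anomalies(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean.
    A 2-sigma cutoff is used here because a single large outlier inflates
    the standard deviation of a small sample; production systems would use
    more robust statistics."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

daily_totals = [100, 102, 98, 101, 99, 103, 97, 500]  # one corrupt record
print(anomalies(daily_totals))  # [500]
```

A real fabric would run checks like this continuously across pipelines and trigger correction or quarantine, but the underlying idea is the same.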


Why Data Catalogs Are the Standard for Data Intelligence

Gartner positions a data catalog as the foundation “to access and represent all metadata types in a connected knowledge graph.” To illustrate, I’ll share a personal experience about why I think a data catalog is crucial to data intelligence. Some years ago, when I worked at a large global technology company, my manager said, “I want you to figure out what metrics we should measure and tell us if our product is making our customers successful. We don’t have the data or analysis today.” I was surprised. How could that be? How can a successful enterprise not have the data model in place to measure a market-leading product? Have they based their decisions on gut instinct? As part of my work, I had to create some hypotheses, gather data, analyze it, and create a recommendation. To start, I had to find an expert who had a significant amount of tribal knowledge and could explain what data existed, where it was located, what it meant, how I should use it, and what pitfalls I might encounter when using it. Next, I had to get the data from the data warehouse and write a lot of SQL queries, all while finding the data science people to get their help.


An enterprise architecture approach to ESG

Often, and especially when looked at through a holistic enterprise architecture approach, achieving or reporting on certain ESG goals (or seizing on innovative new opportunities that ESG brings about) will not be possible through isolated tech changes, but will in fact require a more holistic digital transformation. An EA-supported ESG assessment will give an accurate view of the costs and benefits of an organisation's overall IT portfolio. Architecture lenses will then help to make the decisions necessary for ESG-related digital investment and/or transformation. For example, the high energy footprint of business IT systems is becoming an increasing focus of ESG concern. As a consequence, organisations are feeling significant pressure to move to ‘clean-IT’, optimising the trade-off between energy consumption and computational performance, and incorporating algorithmic and computational efficiencies in IT solutions and designs. Meeting ESG future states will likely require digitalisation and emerging technologies such as IoT, digital twins, big data, and AI.



Quote for the day:

"At the heart of great leadership is a curious mind, heart, and spirit." -- Chip Conley

Daily Tech Digest - May 07, 2022

The term 'digital transformation' needs a makeover: What would you rename it?

“New Ways of Working (NWoW) is our term. Of course, New Ways of Working requires quite a few catalysts in the form of culture and technology. "Culture: Retool your leadership in new ways of leading before you demand your organization be agile. Agile teams are empowered, cross-functional, and have the ability to move quickly and test and learn. The role of the leader is not to tell teams what to do but to create a fertile environment to innovate. The role of the leader is to create the outcomes and eliminate barriers. Train your leaders in these new ways of leading before you send your teams off to be agile. "Technology: Focus on agile infrastructure and data before you demand an agile work environment. Creating agile teams that are cross-functional and empowered is a good step. But this only works if you have embarked on your technical transformation and created the highways to safely and continuously deploy software. The combination of culture, technology, and agility is creating NWoW." -John Marcante, Retired CIO, Vanguard


How Weak Analogies About Software Can Lead Us Astray

Software development/design teams are simultaneously understanding problems while solving them. The team makes dozens of choices every day, ideally informed by business objectives and user testing and applied architecture and data cleanliness. ... Likewise, UX design frameworks are usually interpreted by team-level designers to fit the problem at hand. We’re constantly trading off consistent look and feel across the application suite against what will help users at this step. So in the software business, we’re usually solving and designing and implementing and fixing all at the same time. The hard part isn’t the typing, it’s the thinking. ... So hiring junior developers or offshoring to lower the average engineering rate misses what’s most important. Crafting better software should get us more customers and make us more money. Small teams of empowered developers/designers/product managers with deep understanding of real customer problems will out-earn large teams doing contextless color-by-number implementation of specs. The intrinsic quality of the work matters, which is lost in a command-and-control organization.


The key skills needed to build diversity, equality, inclusion and belonging in the workplace

It’s up to executives to treat DEIB as a central business function, instituting and scaling their efforts. Degreed CEO Dan Levin, for example, describes it as a strategic imperative to integrate DEIB into all aspects of how we operate as a business, including at board level. ... Managers need to take big picture initiatives from the C-suite and use them to allocate work and opportunities in new ways. Those adept at these skills help their staff resolve conflicts and open their minds to new ideas. ... Two skills are especially important for both senior leaders and managers, study authors Stacia Garr and Priyanka Mehrotra write in the report. Respondents at higher-ranked companies for DEIB were more likely to say that people in both positions should excel at challenging the status quo and persuasion. I’ve seen leaders and managers faced with the task of convincing those under them to reconsider how their behaviors or words might make someone else feel excluded. Those who excel at these types of challenges have the skills to do so.


How Big Companies Kill Ideas - And How To Fight Back

Google said all the right things. Then over time — after like the first six months — it became like the Tinder Swindler. I was like, “What happened? Where is all this great stuff you said we were going to have?” It went out the window. Over time we were just one toy in the toy box. When you are bought for $3.2 billion, you would think people would actually respect and invest in the team as a new area of Google’s business. That is not how it worked. Apple is a whole different story, at least when Steve [Jobs] was there. It was respected when you did stuff. People took note and tried to make successes. It was my mistake. I did not realize that Google had gone through many of those billion-dollar acquisitions and just let them flail. They just said, “Oh, that was a fun ride. Moving on.” There was no existential crisis because you always had the ad money tree from search. Then it was just a matter of cutting their losses, as opposed to seeing that these are real people with families, trying to do right on the mission to build this thing. They just saw it more as dollars, at least from the finance side. 


Maintaining a Security Mindset for the Cloud Is Crucial

When you look at networking and security, that really hasn’t kept up with the pace of the application transitions to the cloud. And if you look at what happens today, many of these networks — and the network and security elements in those networks — are do-it-yourself. Migrating from this do-it-yourself approach to an as-a-service approach really allows organizations to unleash the agility and the simplification that their enterprises are looking for. Now we have a lot of examples, even in very recent times, where these do-it-yourself approaches have failed to address the needs of organizations, and one of the most prominent examples in the recent past is the variety of ransomware attacks. We all know that these ransomware attacks have been in the headlines in the recent news. Think about the reasons for these ransomware attacks. There could be many reasons. But one reason that I can think about is that the organizations that are hit by these ransomware attacks, and again, it’s not always black and white


The design of a data governance system

A data governance system should restore control of data to the consumers and businesses generating it, according to this BIS Paper. Technological developments over the last two decades have led to an explosion in the availability and processing of data. Consumers often do not know the benefits of the data they generate, and find it difficult to assert their rights regarding the collection, processing and sharing of their data. We propose a data governance system that restores control to the parties generating the data, by requiring consent prior to their use by service providers. The system should be open, with consent that is revocable, granular, auditable, and with notice in a secure environment. Conditions also include purpose and use limitation, data minimisation, and retention restriction. Trust in the system and widespread adoption are enhanced by mandating specialised data fiduciaries. The experience with India's Data Empowerment Protection Architecture (DEPA) suggests that such a system can operate at scale with low transaction costs.
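The consent properties the paper lists (revocable, granular, auditable, checked prior to use) can be sketched as a tiny data structure. The class and method names below are illustrative inventions, not from the BIS paper, and a real system would add authentication, purpose limitation, and retention rules.

```python
import time

class ConsentRegistry:
    """Toy consent store: granular per (user, data category, purpose),
    revocable at any time, with an append-only audit log."""

    def __init__(self):
        self._grants = {}    # (user, category, purpose) -> granted?
        self.audit_log = []  # append-only record of every decision

    def grant(self, user, category, purpose):
        self._grants[(user, category, purpose)] = True
        self.audit_log.append(("grant", user, category, purpose, time.time()))

    def revoke(self, user, category, purpose):
        self._grants[(user, category, purpose)] = False
        self.audit_log.append(("revoke", user, category, purpose, time.time()))

    def may_use(self, user, category, purpose):
        """Service providers must check consent prior to each use."""
        allowed = self._grants.get((user, category, purpose), False)
        self.audit_log.append(("check", user, category, purpose, allowed))
        return allowed

registry = ConsentRegistry()
registry.grant("alice", "transactions", "credit-scoring")
print(registry.may_use("alice", "transactions", "credit-scoring"))  # True
print(registry.may_use("alice", "transactions", "advertising"))     # False
registry.revoke("alice", "transactions", "credit-scoring")
print(registry.may_use("alice", "transactions", "credit-scoring"))  # False
```

Granularity falls out of keying consent by category and purpose rather than by user alone, and the audit log is what makes revocation verifiable after the fact.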


Embracing culture change on the path to digital transformation

We did realize that if we didn't get the culture embedded that we would not be successful. So building that capability and building the culture was number one on the list. It was five years ago. It feels like a very long time ago to me. But we started that process and through the cloud guild we trained 7,000 people in cloud and 2,700 of those today are industry certified and working in our teams. So we've made really good progress. We've actually moved a lot of the original teams that were a bit hesitant, a bit concerned about having to move to this whole new way of working. And remember that our original teams didn't have a lot of tech skills, so to tell them that they were going to have to take on all of this technical accountability, an operational task that had previously been handed to our outsourcers, was daunting. And the only way we were going to overcome that was to build confidence. And we built confidence through education, through a lot of cultural work, a lot of explaining the strategy, a lot of explaining to people what good looked like in 2020, and how we were going to get to that place.


6 blockchain use cases for cybersecurity

Blockchain technology digitizes and distributes record-keeping across a network, so transaction verification processes no longer rely on a single central institution. Blockchains are always distributed but vary widely in permissions, sizes, roles, transparency, types of participants and how transactions are processed. A decentralized structure offers inherent security benefits because it eliminates the single point of failure. Blockchains also incorporate several built-in security features, such as cryptography, public and private keys, software-mediated consensus, contracts and identity controls. These built-in features offer data protection and integrity by verifying access, authenticating transaction records, proving traceability and maintaining privacy. These configurations enhance blockchain's position in the confidentiality, integrity and availability triad by offering improved resilience, transparency and encryption. Blockchains, however, are designed and built by people, which means they're subject to human error, bias or exposure based on use case, subversion and malicious attacks.
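One of the built-in qualities listed above, integrity via cryptography, can be shown in miniature: each block commits to the previous block's hash, so altering any historical record breaks the chain. This is a teaching sketch (no consensus, keys, or network), not a real blockchain implementation.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    """Each new block records the hash of the block before it."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})

def chain_valid(chain):
    """Integrity check: every stored prev-hash must match a recomputed one."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, "alice pays bob 5")
append_block(chain, "bob pays carol 2")
print(chain_valid(chain))                # True
chain[0]["data"] = "alice pays bob 500"  # tamper with history
print(chain_valid(chain))                # False
```

Tampering with any block invalidates every later link, which is why distributing copies of the chain across many participants makes silent revision of records impractical.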


Secrets to building a healthy CISO-vendor partnership

Any partnership is a two-way street, so as well as knowing what they are looking for themselves, it’s also important for CISOs to understand what a security vendor needs from them in return. “To build a strong relationship and deliver the best experience possible, we need our customers to be open and honest with us,” Rech says. “This honesty should extend to being clear on which other vendors are in the mix as they’re increasingly relying on flexible, cloud-native, open solutions.” The reality is that no one vendor can guarantee protection against every threat, Rech adds, but vendors are uniquely positioned to adapt to a business’s needs when they have full clarity of what those needs are. For example, constantly sharing information on threat groups, attack techniques or sector-specific threat trends can be overwhelming for some CISOs. “When we know more about their business and their priorities, we can direct the most relevant, need-to-know information to them.” Hellickson thinks vendors also benefit from reasonable, respectful feedback during a sales process that can become somewhat frustrating for CISOs.


Top 10 business needs driving IT spending today

“Cybersecurity [spend] has always been growing, but it has transformed from perimeter security that we’ve been used to for 40 years to more and more securing cloud and remote work and remote employees,” says John Lovelock, research vice president and distinguished analyst at Gartner. “Companies that used to be able to put the virtual brick walls around the building and say they’re secure on the inside now have too many openings — to the cloud, partners, customers, employees — for that strategy to be viable.” ... Other big business needs driving IT spending increases — such as boosting efficiency, customer experience, employee productivity, and profitability — also say something about where organizations are in 2022, experts say. “You have an enhanced discipline about cost management now and being smart about where you spend your tech dollars,” Priest says, adding that “it’s one of the best places to invest, especially in inflationary periods.” He says organizations are looking to automate, streamline operations, and reduce costs to help deal with an unsettled labor market, worker shortages, inflation, and geopolitical uncertainty. 



Quote for the day:

"When we lead from the heart, we don't need to work on being authentic we just are!" -- Gordon Tredgold

Daily Tech Digest - May 06, 2022

If you want to make it big in tech, these are the skills you really need

Technical skills are not the only thing businesses need. Increasingly, employers are looking for candidates with the qualities and attributes that can bring teams together, make them more productive, and help companies navigate a work landscape that can change at a moment's notice: qualities that have proven indispensable in getting employers through the tumult of the COVID-19 pandemic.  ... According to the recruitment specialist, tech workers, particularly at middle and senior levels, are now expected to be business partners, and as such they need to be able to clearly communicate their strategies, activities and the impact of those on the wider business. This means good communication skills and interpersonal skills are more valuable than ever – particularly for companies that have had to adopt or scale out digital solutions quickly in response to pandemic-era working. "There are businesses out there that are tech businesses now that perhaps weren't before," says Phil Boden, Robert Half's director of permanent placement services, technology. 


Is Storage-as-Code the Next Step in DevOps?

“Large storage teams and IT organizations are looking to move into this kind of model,” he said. “People are excited to get out of that drudgery piece and build something as code.” And while developers aren’t the decision-makers or the budget holders for the storage market, Ferrario says, they are also a key influencer audience. “The IT developer knows they are responsible for building and automating their own infrastructure services,” he said. “And while they don’t hold the purse strings, they are the executors.” This is a logical trend to follow the popular Kubernetes abstraction, Ferrario said; there’s a widespread demand for infrastructure to be generic enough for everyone to access what they need to build, without having to bug infrastructure engineers all the time. Move faster, with guardrails and policy in place. “If you look at the origin of the cloud operating model years ago, the infrastructure that you as a developer or app owner need is on-demand — and you don’t have to worry about what’s going on behind the scenes,” Ferrario said. But when it’s on-premises, the process is still manual. “You need that Infrastructure-as-a-Service in place, with policy definition and so on.”
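
The "on-demand, with policy definition" model Ferrario describes can be sketched as a declarative claim validated against guardrails before anything is provisioned. This is a hypothetical illustration in Python, not a real product API; the policy fields, claim shape, and function names are all assumptions:

```python
# Toy "storage as code": developers declare what they need, and policy
# guardrails are enforced automatically, without bugging an infra engineer.
POLICY = {"max_size_gib": 500, "allowed_tiers": {"standard", "performance"}}

def validate_claim(claim):
    """Return a list of policy violations (empty list means compliant)."""
    errors = []
    if claim["size_gib"] > POLICY["max_size_gib"]:
        errors.append("size exceeds policy limit")
    if claim["tier"] not in POLICY["allowed_tiers"]:
        errors.append("tier not allowed")
    return errors

def provision(claim):
    """Pretend provisioner: returns a record instead of calling real infra."""
    errors = validate_claim(claim)
    if errors:
        raise ValueError("; ".join(errors))
    return {"volume_id": f"vol-{claim['name']}", **claim}

# A compliant claim is provisioned self-service.
vol = provision({"name": "ci-cache", "size_gib": 100, "tier": "standard"})
assert vol["volume_id"] == "vol-ci-cache"
```

The point of the pattern is that the guardrails live in code and run on every request, so developers move fast while the storage team still controls the policy.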


3 ways building digital acumen can impact business success

Seeking to build digital acumen skills across the organization has provided several opportunities for cross-functional career moves and peer mentoring. Our IT colleagues are taking opportunities to lead and hone the soft skills they need today, like design thinking and agile working methods. In our manufacturing plants, for instance, digital procedures help to minimize the potential for human error because they strengthen our work processes and improve reliability. This data is vital to making timely decisions, whether someone is performing maintenance or an inspection. Our IT team is teaching plant employees how to use those tools because they play a critical role in developing the capabilities and maintaining them in the long term. With 130 different manufacturing sites with multiple plants at each site and tens of thousands of procedures, it has a key impact on productivity and reliability when employees have digital skills in the field versus needing to rely on the IT organization. Other areas in which our IT team is helping to build digital acumen include sales, marketing, and public affairs. 


Can't Fight That REvil Ransomware Feeling Anymore?

None of REvil's likely now-former, core members appear to have been brought to justice. Perhaps that's because they reside in Russia, which has historically ignored cybercrime, provided the criminals never hack Russia or its neighbors, as well as do the occasional favor in return. The new version of REvil's business plan may simply be to bring that name recognition to bear as the group attempts to scare as many victims as possible into paying a seven-figure ransom. The ideal scenario for criminals is that victims pay, quickly and quietly, to avoid news of the attack becoming public, which helps attackers by making their efforts more difficult for law enforcement agencies to trace. If the ransomware group now using the REvil brand name can keep the operation afloat for even a month before again getting disrupted by law enforcement agencies, its members stand to make a serious profit, so long as they remain out of jail long enough to spend it. Unfortunately, the odds are on REvil Rebooted's side. 


9 most important steps for SMBs to defend against ransomware attacks

Investigate whether you can retire out-of-date servers. Microsoft recently released a toolkit to allow customers to potentially retire their last Exchange Server. For years the only way to properly administer mailboxes in Exchange Online where the domain uses Active Directory (AD) for identity management was to have a running Exchange Server in the environment to perform recipient management activities. ... The role eliminates the need to have a running Exchange Server for recipient management. In this scenario, you can install the updated tools on a domain-joined workstation, shut down your last Exchange Server, and manage recipients using Windows PowerShell. ... Investigate the consultants and their access. Attackers look for the weak link and often that is an outside consultant. Always ensure that their remote access tools are patched and up to date. Ensure that they understand that they are often the entry point into a firm and that their actions and weaknesses are introduced into the firm as well. Discuss with your consultants what their processes are.


Delta: A highly available, strongly consistent storage service using chain replication

Fundamentally, chain replication organizes servers in a linear chain. Much like a linked list, each chain involves a set of hosts that redundantly store replicas of objects. Each chain contains a sequence of servers. We call the first server the head and the last one the tail. The figure below shows an example of a chain with four servers. Each write request gets directed to the head server. The update pipelines from the head server to the tail server through the chain. Once all the servers have persisted the update, the tail responds to the write request. Read requests are directed only to tail servers. Whatever a client can read from the tail has been replicated across all servers belonging to the chain, guaranteeing strong consistency. ... Delta supports horizontal scalability by adding new servers into the bucket and smartly rebalancing chains to the newly added servers without affecting the service’s availability and throughput. As an example, one tactic is to have servers with the most chains transfer some chains to new servers as a way to rebalance the load.
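
The head-to-tail flow described above can be sketched in a few lines of Python. This is a single-process toy model of the protocol's happy path only; Delta's real implementation handles networking, failures, and chain rebalancing, and all class names here are illustrative:

```python
class ChainServer:
    """One server in a replication chain; stores object replicas locally."""
    def __init__(self, name):
        self.name = name
        self.store = {}

class Chain:
    """Writes enter at the head and pipeline to the tail; reads hit the tail.

    The tail acknowledges a write only after every server has persisted it,
    so anything readable at the tail is already replicated on the whole
    chain. That is what gives chain replication strong consistency.
    """
    def __init__(self, servers):
        self.servers = servers  # servers[0] is the head, servers[-1] the tail

    def write(self, key, value):
        # Pipeline the update from head to tail, persisting at each hop.
        for server in self.servers:
            server.store[key] = value
        return "ack"  # issued by the tail, after all replicas persist

    def read(self, key):
        # Read requests are served exclusively by the tail.
        return self.servers[-1].store.get(key)

chain = Chain([ChainServer(n) for n in ("head", "mid1", "mid2", "tail")])
chain.write("user:42", {"balance": 10})
assert chain.read("user:42") == {"balance": 10}
# Every server in the chain holds the replica the tail just served.
assert all(s.store["user:42"] == {"balance": 10} for s in chain.servers)
```

Note the division of labor: the head serializes writes, intermediate servers add redundancy, and the tail is the single point of truth for reads and acknowledgments.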


Leaving cloud scalability to automation

The pushback on automated scalability, at least “always” attaching it to cloud-based systems to ensure that they never run out of resources, is that in many situations the operations of the systems won’t be cost-effective and will be less than efficient. For example, an inventory control application for a retail store may need to support 10x the amount of processing during the holidays. The easiest way to ensure that the system will be able to automatically provision the extra capacity it needs around seasonal spikes is to leverage automated scaling systems, such as serverless or more traditional autoscaling services. The issues come with looking at the cost optimization of that specific solution. Say an inventory application has built-in behaviors that the scaling automation detects as needing more compute or storage resources. Those resources are automatically provisioned to support the additional anticipated load. However, for this specific application, behaviors that trigger a need for more resources don’t actually need more resources. 
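
The scaling decision at the center of this trade-off is typically a simple control rule. Below is a hedged Python sketch of target-tracking autoscaling; the parameter names, target, and bounds are illustrative, not any specific cloud provider's algorithm:

```python
def autoscale(current_instances, cpu_utilization, target=0.6,
              min_instances=2, max_instances=20):
    """Naive target-tracking autoscaler: resize the fleet so that average
    utilization lands near `target`, clamped to the configured bounds."""
    desired = round(current_instances * cpu_utilization / target)
    return max(min_instances, min(max_instances, desired))

# A genuine seasonal spike drives utilization up, and capacity follows it.
assert autoscale(4, 0.9) == 6
# Quiet periods scale the fleet back down toward the floor.
assert autoscale(4, 0.3) == 2
```

The catch the article describes is that the rule only sees the metric, not the intent: any behavior that makes utilization look like load, whether or not the application actually needs more resources, will trigger paid provisioning, which is why cost optimization has to look past the automation.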


Ethernet creator Metcalfe: Web3 will have all kinds of 'network effects'

Metcalfe is still refining his pitch for his Law and learning at the same time. "There are going to be all kinds of network effects in Web3," said Metcalfe, during an informal gathering in Williamsburg, Brooklyn, on the sidelines of The Knowledge Graph conference, a conference where enthusiasts of knowledge graphs share technology and techniques and best practices. "For the first time, I am trying to say exactly what kinds of value are created by networks," Metcalfe told ZDNet at the Williamsburg event. "What I have learned today is that knowledge graphs can go a lot farther if they are decentralized," said Metcalfe. "The key is the connectivity." Earlier in the day, Metcalfe had given a talk at the KGC main stage, "Network Effects in Web3." In the talk, Metcalfe explained that "networks are valuable," in many ways. They offer value as "collecting data," said Metcalfe, the ability to get data from many participants. There was also sharing value, sharing disk drives, say, or sharing files. Netflix, said Metcalfe, has "distribution value — they distribute content and it's valuable."


NOAA seeks input on new satellite sensors and digital twin

“The ultimate goal is to improve the forecast skills of NOAA,“ Sid Boukabara, principal scientist at NOAA’s Satellite and Information Service Office of System Architecture and Advanced Planning, told SpaceNews. “These technologies have the potential to take us a leap forward in our ability to provide good data to our customers.” Gathering data in the microwave portion of the electromagnetic spectrum is a key ingredient of accurate weather forecasts. NOAA currently relies on the Northrop Grumman Advanced Technology Microwave Sounder, which gathers data in 22 channels, flying on polar-orbiting weather satellites. Future microwave sounders could “sample at a much higher spectral resolution and would have potentially hundreds of channels,” Boukabara said. “By having a lot more channels, we will be able to better measure the temperature and moisture in the atmosphere.” Measuring the vertical distribution of atmospheric wind from space is another NOAA goal. For now, meteorologists determine wind direction and intensity by observing the motion of moisture in the atmosphere.


4 Database Access Control Methods to Automate

The beauty of using security automation as a data broker is that it has the ability to validate data-retrieval requests. This includes verifying that the requestor actually has permission to see the data being requested. If the proper permissions aren’t in place, the user can submit a request to be added to a specific role through the normal request channels, which is typically the way to go. With automated data access control, this request could be generated and sent within the solution to streamline the process. This also allows additional context-specific information to be included in the data-access request automatically. For example, if someone requests data that they do not have access to within their role, the solution can be configured to look up the database owner, populate an access request and send it to the owner of the data, who can then approve one-time access or grant access for a certain period of time. A common scenario where this is useful is when an employee goes on vacation and someone new is helping with their clients’ needs while they are out.
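
The broker workflow described above, which validates role permissions, looks up the data owner, and supports time-boxed grants for scenarios like vacation coverage, can be sketched as follows. This is a minimal illustration; the role names, tables, and function signatures are all assumptions, not a real product:

```python
from datetime import datetime, timedelta

ROLE_GRANTS = {"analyst": {"sales_db"}, "support": {"tickets_db"}}
DB_OWNERS = {"sales_db": "dana", "tickets_db": "omar"}
temporary_grants = {}  # (user, db) -> expiry time

def request_data(user, role, db, now=None):
    """Broker a data-retrieval request: allow by role, honor unexpired
    temporary grants, otherwise auto-generate a request to the data owner."""
    now = now or datetime.utcnow()
    if db in ROLE_GRANTS.get(role, set()):
        return ("allowed", None)
    expiry = temporary_grants.get((user, db))
    if expiry and expiry > now:
        return ("allowed", None)
    # Look up the owner and populate a context-rich access request.
    return ("pending", {"to": DB_OWNERS[db], "user": user, "db": db})

def approve(user, db, days=7, now=None):
    """Owner grants time-boxed access, e.g. for vacation coverage."""
    now = now or datetime.utcnow()
    temporary_grants[(user, db)] = now + timedelta(days=days)

# An out-of-role request is routed to the data owner automatically ...
status, req = request_data("jo", "support", "sales_db")
assert status == "pending" and req["to"] == "dana"
# ... and once approved for a limited period, the same request succeeds.
approve("jo", "sales_db", days=7)
assert request_data("jo", "support", "sales_db")[0] == "allowed"
```

The key property is that the exception path is automated end to end: the requester never hunts for the owner, and the grant expires on its own instead of lingering.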



Quote for the day:

"A positive attitude will not solve all your problems. But it will annoy enough people to make it worth the effort" -- Herm Albright

Daily Tech Digest - May 05, 2022

Being a responsible CTO isn’t just about moving to the cloud

The reasons for needing to be a responsible CTO are just as strong as the need to be a tech-savvy one if a company wants to thrive in a digital economy. There are many facets to being a responsible CTO, such as making sure that code is being written in a diverse way, and that citizen data is being used appropriately. In a BCS webinar, IBM fellow and vice-president for technology in EMEA, Rashik Parmar, summarised that the three biggest forces driving unprecedented change today included post-pandemic work; digitalisation; and the climate emergency. With many organisations turning to technology to help solve some of the biggest challenges they’re facing today, it’s clear that there will need to be answers about how this tech-heavy economy will impact the environment. It makes sense that this is often the first place that a CTO will start when deciding how to drive a more responsible future. ... If we focus on the environmental considerations, it’s becoming more commonly known that whilst a move to the cloud may be better for reducing an organisation’s carbon emissions than running multiple on-premises systems, the initiative alone isn’t going to spell good news for climate change.


Frozen Neon Invention Jolts Quantum Computer Race

The group's experiments reveal that even without optimization, the new qubit can already stay in superposition for 220 nanoseconds and change state in only a few nanoseconds, figures that outperform qubits based on electric charge that scientists have worked on for 20 years. "This is a completely new qubit platform," Jin says. "It adds itself to the existing qubit family and has big potential to be improved and to compete with currently well-known qubits." The researchers suggest that by developing qubits based on an electron's spin instead of its charge, they could develop qubits with coherence times exceeding one second. They add the relative simplicity of the device may lend itself to easy manufacture at low cost. The new qubit resembles previous work creating qubits from electrons on liquid helium. However, the researchers note frozen neon is far more rigid than liquid helium, which suppresses surface vibrations that can disrupt the qubits. It remains uncertain how scalable this new system is—whether it can incorporate hundreds, thousands or millions of qubits.


AI for Cybersecurity Shimmers With Promise, but Challenges Abound

There are definitely differences in opinions between business executives, who largely consider AI to be a perfect solution, and security analysts on the ground, who have to deal with the day-to-day reality, says Devo's Ollmann. "In the trenches, the AI part is not fulfilling the expectations and the hopes of better triaging, and in the meantime, the AI that is being used to detect threats is working almost too well," he says. "We see the net volume of alerts and incidents that are making it into the SOC analysts hands is continuing to increase, while the capacity to investigate and close those cases has remained static." The continuing challenges that come with AI features mean that companies still do not trust the technology. A majority of companies (57%) are relying on AI features more or much more than they should, compared with only 14% who do not use AI enough, according to respondents to the survey. In addition, few security teams have turned on automated response, partly because of this lack of trust, but also because automated response requires a tighter integration between products that just is not there yet, says Ollmann.


Concerned about cloud costs? Have you tried using newer virtual machines?

“Customers are willing to pay more for newer GPU instances if they deliver value in being able to solve complex problems quicker,” he wrote. Some of this can be chalked up to the fact that, until recently, customers looking to deploy workloads on these instances have had to do so on dedicated GPUs, as opposed to renting smaller virtual processing units. And while Rogers notes that customers, in large part, prefer to run their workloads this way, that may be changing. Over the past few years, Nvidia — which dominates the cloud GPU market — has, for one, introduced features that allow customers to split GPUs into multiple independent virtual processing units using a technology called Multi-instance GPU or MIG for short. Debuted alongside Nvidia’s Ampere architecture in early 2020, the technology enables customers to split each physical GPU into up to seven individually addressable instances. And with the chipmaker’s Hopper architecture and H100 GPUs, announced at GTC this spring, MIG gained per-instance isolation, I/O virtualization, and multi-tenancy, which open the door to their use in confidential computing environments.


Attackers Use Event Logs to Hide Fileless Malware

The ability to inject malware into a system’s memory classifies it as fileless. As the name suggests, fileless malware infects targeted computers while leaving behind no artifacts on the local hard drive, making it easy to sidestep traditional signature-based security and forensics tools. The technique, where attackers hide their activities in a computer’s random-access memory and use native Windows tools such as PowerShell and Windows Management Instrumentation (WMI), isn’t new. What is new, however, is how the encrypted shellcode containing the malicious payload is embedded into Windows event logs. To avoid detection, the code “is divided into 8 KB blocks and saved in the binary part of event logs.” Legezo said, “The dropper not only puts the launcher on disk for side-loading, but also writes information messages with shellcode into existing Windows KMS event log.” “The dropped wer.dll is a loader and wouldn’t do any harm without the shellcode hidden in Windows event logs,” he continues. “The dropper searches the event logs for records with category 0x4142 (“AB” in ASCII) and having the Key Management Service as a source.
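
For defenders, the storage trick itself is simple to reason about: a payload is split into 8 KB blocks, stashed across records tagged with a magic category (0x4142, "AB"), and reassembled in sequence at runtime. The Python below is a harmless, simplified illustration of that chunking scheme; the record dictionaries are stand-ins for real Windows event log entries, not an actual event log API:

```python
CHUNK = 8 * 1024   # the 8 KB block size described in the analysis
CATEGORY = 0x4142  # "AB" in ASCII, the marker used to find the blocks

def stash(payload: bytes):
    """Split a payload into 8 KB blocks, each tagged with the magic category."""
    return [
        {"category": CATEGORY, "seq": i, "blob": payload[off:off + CHUNK]}
        for i, off in enumerate(range(0, len(payload), CHUNK))
    ]

def reassemble(records):
    """Filter records by category and stitch the blocks back in order."""
    marked = [r for r in records if r["category"] == CATEGORY]
    return b"".join(r["blob"] for r in sorted(marked, key=lambda r: r["seq"]))

payload = bytes(range(256)) * 100  # 25,600 bytes of stand-in data
log = stash(payload) + [{"category": 0x1111, "seq": 0, "blob": b"benign"}]
assert reassemble(log) == payload
assert len(stash(payload)) == 4  # three full 8 KB blocks plus a remainder
```

This also suggests the detection angle: hunting for event log records with unusual categories or large opaque binary parts, rather than relying on file-based signatures.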


Fortinet CEO Ken Xie: OT Business Will Be Bigger Than SD-WAN

"We definitely see OT as a bigger market going forward, probably bigger than SD-WAN," Xie tells investors Wednesday. "The growth is very, very strong. We do see a lot of potential, and we also have invested a lot in this area to meet the demand." Despite its potential, Fortinet's OT practice today is considerably smaller than its SD-WAN business, which has been a company priority for years. SD-WAN accounted for 16% of Fortinet's total billings in the quarter ended Dec. 31 while OT accounted for just 8% of total billings over that same time period. Fortinet last summer had the second-largest SD-WAN market share in the world, trailing only Cisco. Fortinet's OT success coincides with growing demand from manufacturers, which CFO Keith Jensen says is the one vertical that continues to stand out for the company. ... "The strength in manufacturing really speaks to the threat environment, ransomware, OT, and things of that nature," Jensen says. "Manufacturing is trying desperately to break into the top five of our verticals and it's getting closer and closer every quarter."


Meta has built a massive new language AI—and it’s giving it away for free

Meta AI says it wants to change that. “Many of us have been university researchers,” says Pineau. “We know the gap that exists between universities and industry in terms of the ability to build these models. Making this one available to researchers was a no-brainer.” She hopes that others will pore over their work and pull it apart or build on it. Breakthroughs come faster when more people are involved, she says. Meta is making its model, called Open Pretrained Transformer (OPT), available for non-commercial use. It is also releasing its code and a logbook that documents the training process. The logbook contains daily updates from members of the team about the training data: how it was added to the model and when, what worked and what didn’t. In more than 100 pages of notes, the researchers log every bug, crash, and reboot in a three-month training process that ran nonstop from October 2021 to January 2022. With 175 billion parameters (the values in a neural network that get tweaked during training), OPT is the same size as GPT-3. This was by design, says Pineau. 


Tackling the threats posed by shadow IT

Shadow IT can be tough to mitigate, given the embedded culture of hybrid working in many organizations, in addition to a general lack of engagement from employees with their IT teams. For staff to continue accessing apps securely from anywhere, at any time, and from any device, businesses must evolve their approach to organizational security. Given the modern-day working environment moves at such a fast pace, employees have turned en masse to shadow IT when the experience isn’t quick or accurate enough. This leads to the bypassing of secure networks and best practices and can leave IT departments out of the process. A way of controlling this is by deploying corporate managed devices that provide remote access, giving IT teams most of the control and removing the temptation for employees to use unsanctioned hardware. Providing them with compelling apps, data, and services with a good user experience should see a reduced dependence on shadow IT, putting IT teams back in the driving seat and restoring security. 


5 AI adoption mistakes to avoid

Every AI-related business goal begins with data – it is the fuel that enables AI engines to run. One of the biggest mistakes companies make is not taking care of their data. This begins with the misconception that data is solely the responsibility of the IT department. Before data is captured and input into AI systems, business subject matter experts and data scientists should be looped in, and executives should provide oversight to ensure the right data is being captured and maintained appropriately. It’s important for non-IT personnel to realize they not only benefit from good data in yielding quality AI recommendations, but their expertise is a critical input to the AI system. Make sure that all teams have a shared sense of responsibility for curating, vetting, and maintaining data. Data management procedures are also a key component of data care. ... AI requires intervention to sustain it as an effective solution over time. For example, if AI is malfunctioning or if business objectives change, AI processes need to change. Doing nothing or not implementing adequate intervention could result in AI recommendations that hinder or act contrary to business objectives.


SEC Doubles Cyber Unit Staff to Protect Crypto Users

The SEC says that the newly named Crypto Assets and Cyber Unit, formerly known as the Cyber Unit, in the Division of Enforcement, will grow to 50 dedicated positions. "The U.S. has the greatest capital markets because investors have faith in them, and as more investors access the crypto markets, it is increasingly important to dedicate more resources to protecting them," says SEC Chair Gary Gensler. This dedicated unit has successfully brought dozens of cases against those seeking to take advantage of investors in crypto markets, he says. ... "This is great news! A lot of the cryptocurrency market is against any regulations, including those that would safeguard their own value, but that's not the vast majority of the rest of the world. The cryptocurrency world is full of outright scams, criminals and ne'er-do-well-ers," says Roger Grimes, data-driven defense evangelist at cybersecurity firm KnowBe4. Grimes adds that even legal and very sophisticated financiers and investors are taking advantage of the immaturity of the cryptocurrency market.



Quote for the day:

"The very essence of leadership is that you have to have vision. You can't blow an uncertain trumpet." -- Theodore M. Hesburgh

Daily Tech Digest - May 04, 2022

The cloud data migration challenge continues - why data governance is job one

How can governance help? The role of governance is to define the rules and policies for how individuals and groups access data properties and the kind of access they are allowed. Yet people in an organization rarely operate according to well-defined roles. They perform in multiple roles, often provisionally. On-ramping has to happen immediately; off-ramping has to be a centralized function. One very large organization we dealt with discovered that departing employees still had access to critical data for seven to nine days! So how can data governance support more intelligent data security? After all, without governance, security would be arbitrary. Many organizations that employ security schemes struggle because such schemes tend to be either too loose or too tight and almost always too rigid (insufficiently dynamic). In this way, security can hinder the progress of the organization. Yet, given the complexity of data architecture today, it’s become impossible to manage security for individuals without a coherent and dynamic governance policy to drive security allowance or grants for exceptions to those rules. 
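
The off-ramping failure described above (departing employees keeping access for days) disappears when access decisions evaluate a dynamic policy rather than static grants. A toy sketch in Python, with all names, fields, and the policy table purely illustrative:

```python
from datetime import date

# Access is granted by role *and* current employment status, so off-ramping
# is a centralized, immediate consequence of the HR record changing, rather
# than a cleanup task that lags days behind a departure.
employees = {
    "sam": {"roles": {"finance"}, "left_on": None},
    "pat": {"roles": {"finance"}, "left_on": date(2022, 5, 1)},
}
POLICY = {"quarterly_ledger": {"finance"}}  # resource -> roles allowed

def can_access(user, resource, today):
    record = employees.get(user)
    if record is None or (record["left_on"] and record["left_on"] <= today):
        return False  # off-ramped the moment the departure is recorded
    return bool(record["roles"] & POLICY[resource])

today = date(2022, 5, 4)
assert can_access("sam", "quarterly_ledger", today)
assert not can_access("pat", "quarterly_ledger", today)  # no lingering window
```

Because the check runs on every request against current data, there is no seven-to-nine-day window to close; the governance policy, not per-system grant lists, is the source of truth.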


Cybersecurity and the Pareto Principle: The future of zero-day preparedness

There’s a good reason why software asset inventory and management is the second-most important security control, according to the Center for Internet Security’s (CIS) Critical Security Controls. It’s “essential cyber hygiene” to know what software is running and being able to access that up-to-date information instantaneously. It’s as though you were a new master-at-arms for a local baron in the Middle Ages. Your first duty would be to map out the castle grounds that you are charged to protect. ... As we put Log4Shell behind us, let’s incorporate these lessons learned for a more prepared future. The allocation of resources by enterprise security teams needs to be more purposeful, as attackers become increasingly sophisticated and continue to have what feels like unlimited resources. The value added through clear visibility and real-time insights into your entire ecosystem becomes all the more important. Remember, the core scope of the security team is to create a secure IT ecosystem, mitigate the exploit of known vulnerabilities and monitor for any suspicious activity. 


Expect to see more online data scraping, thanks to a misinterpreted court ruling

What can and should IT do about that? Given that these are generally publicly visible pages, it’s a problem. There are few technical methods to block scrapers that wouldn’t cause problems for the site visitors the enterprise wants. Years ago, I was managing a media outlet that was making a huge move to premium content, meaning that readers would now have to pay for selected premium stories. We ran into a problem. We couldn’t allow people to freely share premium content, as we needed people to buy those subscriptions. That meant that we blocked cut-and-paste and specifically blocked someone from saving the page as a PDF. But that meant that those pages also couldn’t be printed. (Saving as PDF is really printing to PDF, so blocking PDF downloads meant blocking all printers.) It took just a couple of hours before new premium subscribers screamed that they paid for access and they need to be able to print pages and read them at home or on a train. After quite a few subscribers threatened to cancel their paid subscriptions, we surrendered and reinstated the ability to print.


Unpatched DNS Bug Puts Millions of Routers, IoT Devices at Risk

The flaw affects the ubiquitous open-source Apache Log4j framework—found in countless Java apps used across the internet. In fact, a recent report found that the flaw continues to put millions of Java apps at risk, though a patch exists for the flaw. Though it affects a different set of targets, the DNS flaw also has a broad scope not only because of the devices it potentially affects, but also because of the inherent importance of DNS to any device connecting over IP, researchers said. DNS is a hierarchical database that serves the integral purpose of translating a domain name into its related IP address. To distinguish the responses of different DNS requests aside from the usual 5-tuple (source IP, source port, destination IP, destination port, protocol) and the query, each DNS request includes a parameter called “transaction ID.” The transaction ID is a unique number per request that is generated by the client and added in each request sent. It must be included in a DNS response to be accepted by the client as the valid one for the request, researchers noted. “Because of its relevance, DNS can be a valuable target for attackers,” they observed.
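
The transaction-ID check can be shown at the byte level. The sketch below packs a minimal 12-byte DNS header per RFC 1035 and demonstrates why a resolver must only accept responses that echo its ID; it is an illustration of the matching rule, not a working resolver:

```python
import secrets
import struct

def build_query_header(txid, qdcount=1):
    """Pack a 12-byte DNS header: ID, flags (RD set), then QD/AN/NS/AR counts."""
    flags = 0x0100  # standard query, recursion desired
    return struct.pack(">HHHHHH", txid, flags, qdcount, 0, 0, 0)

def response_matches(query_header, response_header):
    """A client must only accept a response whose transaction ID echoes
    the one it generated for the outstanding query."""
    (q_id,) = struct.unpack(">H", query_header[:2])
    (r_id,) = struct.unpack(">H", response_header[:2])
    return q_id == r_id

# Unpredictable IDs are what make off-path response spoofing hard; the bug
# at issue was precisely that IDs could be predictable.
txid = secrets.randbelow(0x10000)
query = build_query_header(txid)
good = struct.pack(">HHHHHH", txid, 0x8180, 1, 1, 0, 0)
forged = struct.pack(">HHHHHH", (txid + 1) & 0xFFFF, 0x8180, 1, 1, 0, 0)
assert response_matches(query, good)
assert not response_matches(query, forged)
```

Since the ID is only 16 bits, an attacker who can predict it (or brute-force it faster than the legitimate reply arrives) can poison the client's answer, which is why generation must be cryptographically unpredictable.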



Managed services vs. hosted services vs. cloud services: What's the difference?

Managed service providers (MSPs) existed first - before we were talking about the big public cloud providers. “I’ve seen some definitions where MSPs are a superset and all CSPs are MSPs, but not all MSPs are CSPs. That seems a reasonable definition to me,” says Miniman. One historical example of a managed service provider you may know is Rackspace: Their company name literally reflected that you were buying space in their rack to run workloads. The way their business started out was as a hosted service: Your server ran in Rackspace’s data center. But Rackspace also offered other types of services to customers - managed services. ... “When I think of a hosted environment, that is something dedicated to me,” says Miniman. “So traditionally, there was a physical machine…that maybe had a label on it. But definitely from a security standpoint, it was “company X is renting this machine that is dedicated to that environment.” Public cloud service providers sell hundreds of services: You can think of those as standard tools, just like you’d find standard metric tools walking into any hardware store.


Making Agile Work in Asynchronous and Hybrid Environments

The ideal state for asynchronous teams is to remain aligned passively - or with little effort - eliminating the need for frequent meetings or lengthy documentation of the minutiae of every project. To pull this off, visual collaboration should be a key element of Agile management for teams that are working remotely and asynchronously. Visual collaboration brings the ease of alignment of the whiteboard into the digital workplace, giving developers a living artifact of project plans that can include diagrams, UX mockups, embedded videos, and other communication tools that can make async work nearly error-proof. Our team at Miro uses a variety of visual tools to manage our development, and many of these tools are available as free templates that other teams can use. The Agile product roadmap helps prioritize work and shift tasks as priorities change. And the product launch board helps our team visually align design, development, and go-to-market (GTM) teams as we come down to the wire on a new launch. The shared nature of these tools gives us confidence as we work.

Three steps to an effective data management and compliance strategy

Businesses clearly need to know more about their data to meet compliance requirements, but the challenge is sorting through the noise amid all that volume. Data analytics is essential for enterprises looking to increase efficiency, improve business decision-making, and gain that important competitive edge while still complying with today’s data standards. However, while big data can add significant value to the decision-making process, supporting large volumes of unstructured data can be complex, as inadequate data management and data protection introduce unacceptable levels of risk. The emergence of DataOps, an automated and process-oriented methodology aimed at improving the quality of data analytics, further supports the requirement for enhanced data management. Driving faster and more comprehensive analytics is key to extracting value from data, but this can only be done if data is managed correctly, the right governance protocols are in place, and data quality is kept to the highest standard.


5 key industries in need of IoT security

The growth of IoT has spurred a rush to deploy billions of devices worldwide. Companies across key industries have amassed vast fleets of connected devices, creating gaps in security. Today, IoT security is overlooked in many areas. For example, a sizable percentage of devices still use the user ID and password “admin/admin” because their default settings are never changed. The reason security has become an afterthought is that most devices are invisible to organizations. Hospitals, casinos, airports, cities, etc. simply have no way of seeing every device on their networks. ... Cities rely on 1.1 billion IoT devices for physical security, operating critical infrastructure ranging from traffic control systems and street lights to subways, emergency response systems, and more. Any breach or failure in these devices could pose a threat to citizens. You see it in the movies: brilliant hackers control the traffic lights across a city, with perfect timing, to guide an armored vehicle into a trap. Then there’s real life; for instance, when a hacker in Romania took control of Washington DC’s outdoor video cameras days before the Trump inauguration.
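The default-credentials problem above lends itself to a simple automated audit. The sketch below is hypothetical (the inventory format, the function name, and the list of known defaults are illustrative assumptions, not any vendor's API): given a device inventory, it flags every device still using a known factory-default credential pair.

```python
# Known factory-default credential pairs, including the
# "admin/admin" example cited in the article.
DEFAULT_CREDS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
}

def flag_default_credentials(devices):
    """Return the IDs of devices whose (username, password) pair
    matches a known factory default.

    devices: dict mapping device_id -> (username, password).
    """
    return [dev_id for dev_id, creds in devices.items()
            if creds in DEFAULT_CREDS]

inventory = {
    "camera-01": ("admin", "admin"),      # never reconfigured
    "hvac-07":   ("facilities", "Xk9!a2"),  # rotated credential
}
print(flag_default_credentials(inventory))  # ['camera-01']
```

In practice the hard part is the point the article makes: organizations often cannot even enumerate the devices on their networks, so building the `inventory` input is the real challenge.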


Getting strategy wrong—and how to do it right instead

Making matters more complex, especially in areas of public policy and defense, real-life leaders do not have a neat economist’s single measure of value. Instead, they are faced with a bundle of conflicting ambitions—a group of desires, goals, intents, values, and fears—that cannot all be satisfied simultaneously. Forging a sense of purpose from this bundle is part of the gnarly problem. Making matters most complex is the fact that the connection between potential actions and actual outcomes is unclear. A gnarly challenge is not solved with analysis or the application of preset frameworks. A coherent response arises only through a process of diagnosing the nature of the challenges, framing, reframing, chunking down the scope of attention, referring to analogies, and developing insight. The result is a design, or creation, embodying purpose. I call it a creation because it is often not obvious at the start, the product of insight and judgment rather than an algorithm. Implicit in the concept of insightful design is that knowledge, though required, is not, by itself, sufficient.


Understand the 3 P’s of Cloud Native Security

The movement to shift security left has empowered developers to find and fix defects early so that when the application is pushed into production, it is as free as possible from known vulnerabilities at that time… But shifting security left is just the beginning. Vulnerabilities arise in software components that are already deployed and running. Organizations need a comprehensive approach that spans left and right, from development through production. While there’s no formulaic, one-size-fits-all way to achieve end-to-end security, there are some worthwhile strategies that can help you get there. ... Shifting left can help organizations develop applications with security in mind. But no matter how confident you are in the security of an application when it leaves development, there is no guarantee that it will remain secure in production. We have seen on a large scale that vulnerabilities are often disclosed well after the affected code is deployed to production. Reminders include Apache Struts, Heartbleed, and, most recently, Log4j, whose vulnerable code was first published in 2013 but whose flaw was not discovered until last year.




Quote for the day:

"Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kanter