Daily Tech Digest - May 17, 2022

Only DevSecOps can save the metaverse

We’ve previously talked about “shifting left,” or DevSecOps, the practice of making security a “first-class citizen” in software development, baking it in from the start rather than bolting it on at runtime. Log4j, SolarWinds, and other high-profile software supply chain attacks only underscore the importance and urgency of shifting left. The next “big one” is inevitably around the corner. A more optimistic view is that, far from highlighting the failings of today’s development security, the metaverse might be yet another reckoning for DevSecOps, accelerating the adoption of automated tools and better security coordination. If so, that would be a welcome payoff for all the hard work. As we continue to watch the rise of the metaverse, we believe supply chain security should take center stage and that organizations will rally to democratize security testing and scanning, implement software bill of materials (SBOM) requirements, and increasingly leverage DevSecOps solutions to create a full chain of custody for software releases to keep the metaverse running smoothly and securely.


EU Parliament, Council Agree on Cybersecurity Risk Framework

"The revised directive aims to remove divergences in cybersecurity requirements and in implementation of cybersecurity measures in different member states. To achieve this, it sets out minimum rules for a regulatory framework and lays down mechanisms for effective cooperation among relevant authorities in each member state. It updates the list of sectors and activities subject to cybersecurity obligations, and provides for remedies and sanctions to ensure enforcement," according to the Council of the EU. The directive will also establish the European Union Cyber Crises Liaison Organization Network, EU-CyCLONe, which will support the coordinated management of large-scale cybersecurity incidents. The European Commission says that the latest framework is set up to counter Europe's increased exposure to cyberthreats. The NIS2 directive will also cover more sectors that are critical for the economy and society, including providers of public electronic communications services, digital services, waste water and waste management, manufacturing of critical products, postal and courier services and public administration, both at a central and regional level.


Catalysing Cultural Entrepreneurship in India

What constitutes CCIs varies across countries depending on their diverse cultural resources, know-how, and socio-economic contexts. A commonly accepted understanding of CCIs comes from the United Nations Educational, Scientific and Cultural Organization (UNESCO), which defines this sector as “activities whose principal purpose is production or reproduction, promotion, distribution or commercialisation of goods, services, and activities of a cultural, artistic, or heritage-related nature.” CCIs play an important role in a country’s economy: they offer recreation and well-being, while spurring innovation and economic development at the same time. First, a flourishing cultural economy is a driver of economic growth as attaching commercial value to cultural products, services, and experiences leads to revenue generation. These cultural goods and ideas are also contributors to international trade. Second, although a large workforce in this space is informally organised and often unaccounted for in official labour force statistics, cultural economies are some of the biggest employers of artists, craftspeople, and technicians.


Rethinking Server-Timing As A Critical Monitoring Tool

Server-Timing is uniquely powerful, because it is the only HTTP Response header that supports setting free-form values for a specific resource and makes them accessible from a JavaScript Browser API separate from the Request/Response references themselves. This allows resource requests, including the HTML document itself, to be enriched with data during their lifecycle, and that information can be inspected for measuring the attributes of that resource! The only other headers that come close to this capability are the HTTP Set-Cookie / Cookie headers. Unlike Cookie headers, Server-Timing appears only on the response for a specific resource, whereas Cookies are sent on requests and responses for all resources after they’re set and until they expire. Having this data bound to a single resource response is preferable, as it prevents ephemeral data about all responses from becoming ambiguous or contributing to a growing collection of cookies sent for the remaining resources during a page load.
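
As a rough illustration (not from the article), the header can be emitted by any web framework; the sketch below uses Flask, and the metric names and values are invented:

```python
# Hedged sketch: emitting a Server-Timing header from a hypothetical Flask endpoint.
# The metric names ("db", "cache"), descriptions, and timings are made up for illustration.
import time
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/report")
def report():
    start = time.perf_counter()
    rows = [{"id": 1, "status": "ok"}]   # stand-in for a real database query
    time.sleep(0.02)                     # pretend the query took ~20 ms
    db_ms = (time.perf_counter() - start) * 1000

    resp = jsonify(rows)
    # Header format: <name>;desc="<description>";dur=<milliseconds>, one entry per metric.
    resp.headers["Server-Timing"] = f'db;desc="rows query";dur={db_ms:.1f}, cache;desc="miss";dur=0'
    return resp
```

In the browser, those entries then appear on the matching resource’s timing entry (the serverTiming array on PerformanceResourceTiming and PerformanceNavigationTiming objects), which is the JavaScript API the article refers to.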


Scalability and elasticity: What you need to take your business to the cloud

At a high level, there are two types of architectures: monolithic and distributed. Monolithic (or layered, modular monolith, pipeline, and microkernel) architectures are not natively built for efficient scalability and elasticity — all the modules are contained within the main body of the application and, as a result, the entire application is deployed as a single whole. There are three types of distributed architectures: event-driven, microservices and space-based. ... For application scaling, adding more instances of the application with load-balancing ends up scaling out the other two portals as well as the patient portal, even though the business doesn’t need that. Most monolithic applications use a monolithic database — one of the most expensive cloud resources. Cloud costs grow exponentially with scale, and this arrangement is expensive, especially regarding maintenance time for development and operations engineers. Another aspect that makes monolithic architectures unsuitable for supporting elasticity and scalability is the mean-time-to-startup (MTTS) — the time a new instance of the application takes to start. 


Proof of Stake and our next experiments in web3

Proof of Stake is a next-generation consensus protocol to secure blockchains. Unlike Proof of Work, which relies on miners racing each other with increasingly complex cryptography to mine a block, Proof of Stake secures new transactions to the network through self-interest. Validator nodes (run by the people who verify new blocks for the chain) are required to put a significant asset up as collateral in a smart contract to prove that they will act in good faith. For instance, for Ethereum that is 32 ETH. Validator nodes that follow the network's rules earn rewards; validators that violate the rules will have portions of their stake taken away. Anyone can operate a validator node as long as they meet the stake requirement. This is key. Proof of Stake networks require lots and lots of validator nodes to validate and attest to new transactions. The more participants there are in the network, the harder it is for bad actors to launch a 51% attack to compromise the security of the blockchain. Once Ethereum shifts to Proof of Stake, validators will be chosen at random to create (validate) new blocks.
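
The toy sketch below only illustrates those ideas (a stake requirement to join, random selection of a block proposer, rewards for honest validators, and slashing for violations); it is not Ethereum's actual algorithm, and the validator names, reward, and penalty values are invented:

```python
# Toy illustration of Proof of Stake mechanics, not a real consensus implementation.
import random

MIN_STAKE = 32  # ETH, the Ethereum requirement mentioned above

validators = {"alice": 32.0, "bob": 32.0, "carol": 32.0}  # hypothetical validator set

def choose_proposer(validators: dict) -> str:
    eligible = [name for name, stake in validators.items() if stake >= MIN_STAKE]
    return random.choice(eligible)  # real networks use verifiable randomness, not random.choice

def apply_result(validators: dict, proposer: str, followed_rules: bool,
                 reward: float = 0.01, penalty: float = 1.0) -> None:
    if followed_rules:
        validators[proposer] += reward   # honest validators earn rewards
    else:
        validators[proposer] -= penalty  # rule violators have part of their stake slashed

proposer = choose_proposer(validators)
apply_result(validators, proposer, followed_rules=True)
```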


Is NLP innovating faster than other domains of AI

There have been several stages in the evolution of the natural language processing field. It started in the 80s with expert systems, moving on to the statistical revolution, and finally the neural revolution. The neural revolution was enabled by the combination of deep neural architectures, specialised hardware, and a large amount of data. That said, the revolution in the NLP domain was much slower than in other fields like computer vision, which benefitted greatly from the emergence of large scale pre-trained models, which, in turn, were enabled by large datasets like ImageNet. Pretrained ImageNet models helped in achieving state-of-the-art results in tasks like object detection, human pose estimation, semantic segmentation, and video recognition. They enabled the application of computer vision to domains where the number of training examples is small, and annotation is expensive. One of the most definitive inventions in recent times was the Transformer. Developed at Google Brain in 2017, the Transformer is a novel neural network architecture based on the concept of the self-attention mechanism. The model outperformed both recurrent and convolutional models. 
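
For readers unfamiliar with self-attention, the minimal NumPy sketch below shows the scaled dot-product form at the heart of the Transformer; the shapes and random weights are arbitrary, and real Transformers add multi-head attention, positional encodings, and feed-forward layers on top:

```python
# Minimal sketch of scaled dot-product self-attention; shapes and weights are arbitrary.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # project inputs to queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # similarity of every token to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # each output is a weighted mix of value vectors

tokens, d_model = 4, 8
X = np.random.randn(tokens, d_model)
Wq, Wk, Wv = (np.random.randn(d_model, d_model) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)                  # shape: (4, 8)
```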

Before you get too excited about Power Query in Excel Online, though, remember one important difference between it and a Power BI report or a paginated report. In a Power BI report or a paginated report, when a user views a report, nothing they do – slicing, dicing, filtering etc – affects or is visible to any other users. With Power Query and Excel Online, however, you’re always working with a single copy of a document, so when one user refreshes a Power Query query and loads data into a workbook, that change affects everyone. As a result, the kind of parameterised reports I show in my SQLBits presentation that work well in desktop Excel (because everyone can have their own copy of a workbook) could never work well in the browser, although I suppose Excel Online’s Sheet View feature offers a partial solution. Of course, not all reports need this kind of interactivity, and this does make collaboration and commenting on a report much easier; and when you’re collaborating on a report, the Show Changes feature makes it easy to see who changed what.


Observability Powered by SQL: Understand Your Systems Like Never Before With OpenTelemetry Traces and PostgreSQL

Given that observability is an analytics problem, it is surprising that the current state of the art in observability tools has turned its back on the most common standard for data analysis broadly used across organizations: SQL. Good old SQL could bring some key advantages: it’s surprisingly powerful, with the ability to perform complex data analysis and support joins; it’s widely known, which reduces the barrier to adoption since almost every developer has used relational databases at some point in their career; it is well-structured and can support metrics, traces, logs, and other types of data (like business data) to remove silos and support correlation; and finally, visualization tools widely support it. ... You're probably thinking that observability data is time-series data that relational databases struggle with once you reach a particular scale. Luckily, PostgreSQL is highly flexible and allows you to extend and improve its capabilities for specific use cases. TimescaleDB builds on that flexibility to add time-series superpowers to the database and scale to millions of data points per second and petabytes of data.
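
As a hedged sketch of what that looks like in practice, the Python snippet below runs one such SQL query over spans stored in PostgreSQL/TimescaleDB; the table and column names (otel_spans, service_name, duration_ms, start_time) and the connection string are assumptions for illustration, not the schema of any particular product:

```python
# Hedged sketch: per-service p95 span latency over the last hour, computed with plain SQL.
# Table/column names and the connection string are invented; adapt them to your own schema.
import psycopg2

conn = psycopg2.connect("dbname=observability user=otel")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT service_name,
               percentile_cont(0.95) WITHIN GROUP (ORDER BY duration_ms) AS p95_ms,
               count(*) AS span_count
        FROM otel_spans
        WHERE start_time > now() - interval '1 hour'
        GROUP BY service_name
        ORDER BY p95_ms DESC;
    """)
    for service, p95, spans in cur.fetchall():
        print(f"{service}: p95={p95:.1f} ms over {spans} spans")
```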


Why cyber security can’t just say “no”

Ultimately, IT security is all about keeping the company safe from damages — financial damages, operational damages, reputational and brand damages. You’re trying to prevent a situation that not only will harm the company’s well-being, but also that of its employees. That is why we need to explain the actual threats and how incidents occur. Explain what steps can be taken to lower the chances and impact of those incidents occurring and show them how they can be part of that. People love learning new things, especially if it has something to do with their daily work. Explain the tradeoffs that are being made, at least in high-level terms. Explain how quickly convenience, such as running a machine as an administrator, can lead to abuse. Not only will they appreciate you for your honesty, but they will have the right answer the next time the question comes up. They’ll think within the constraints and find new ways of adding value to the business, while removing factors from their daily work, which might mean one less incident down the line.



Quote for the day:

"Real leadership is being the person others will gladly and confidently follow." -- John C. Maxwell

Daily Tech Digest - May 16, 2022

OAuth Security in a Cloud Native World

As you integrate OAuth into your applications and APIs, you will realize that the authorization server you have chosen is a critical part of your architecture that enables solutions for your security use cases. Using up-to-date security standards will keep your applications aligned with security best practices. Many of these standards map to company use cases, some of which are essential in certain industry sectors. APIs must validate JWT access tokens on every request and authorize them based on scopes and claims. This is a mechanism that scales to arbitrarily complex business rules and spans across multiple APIs in your cluster. Similarly, you must be able to implement best practices for web and mobile apps and use multiple authentication factors. The OAuth framework provides you with building blocks rather than an out-of-the-box solution. Extensibility is thus essential for your APIs to deal with identity data correctly. One critical area is the ability to add custom claims from your business data to access tokens. Another is the ability to link accounts reliably so that your APIs never duplicate users if they authenticate in a new way, such as when using a WebAuthn key.
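
As a minimal sketch of that per-request check (using the PyJWT library; the issuer, audience, signing key, and scope values are placeholders, and a real API would typically fetch and cache the identity provider's signing keys):

```python
# Hedged sketch of "validate the JWT on every request, then authorize on scopes and claims."
# Issuer, audience, and scope values are placeholders; the verification key comes from your IdP.
import jwt  # PyJWT

def authorize_request(token: str, public_key: str, required_scope: str) -> dict:
    claims = jwt.decode(
        token,
        public_key,
        algorithms=["RS256"],                    # never accept unexpected algorithms
        audience="https://api.example.com",      # placeholder audience
        issuer="https://idp.example.com",        # placeholder issuer
    )
    scopes = claims.get("scope", "").split()     # OAuth convention: space-delimited scope string
    if required_scope not in scopes:
        raise PermissionError(f"missing required scope: {required_scope}")
    return claims  # downstream business rules can use custom claims from here

# claims = authorize_request(bearer_token, idp_public_key, required_scope="orders:read")
```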


APIs Outside, Events Inside

It goes without saying that external clients of an application calling the same API version — the same endpoint — with the same input parameters expect to see the same response payload over time. The need of end users for such certainty is once again understandable but stands in stark contrast to the requirements of the distributed application (DA) itself. In order for distributed applications to evolve and grow at the speed required in today’s world, those autonomous development teams assigned to each constituent component need to be able to publish often-changing, forward-and-backward-compatible payloads as a single event to the same fixed endpoints using a technique I call "version-stacking." ... A key concern of architects when exposing their applications to external clients via APIs is — quite rightly — security. Those APIs allow external users to effect changes within the application itself, so they must be rigorously protected, requiring many and frequent authorization steps. These security steps have obvious implications for performance, but regardless, they do seem necessary.
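
The article does not spell out the mechanics of "version-stacking," so the following is only one plausible reading, sketched for illustration: a single event published to a fixed endpoint carries multiple payload versions, so older and newer consumers can each read the shape they understand. All field names are invented.

```python
# Hypothetical illustration of a "version-stacked" event; not the author's actual design.
event = {
    "type": "order.created",
    "versions": {
        "v1": {"order_id": "123", "total": 49.99},                                 # original shape
        "v2": {"order_id": "123", "total": {"amount": 4999, "currency": "USD"}},   # newer shape
    },
}

def read_order_total(event: dict) -> float:
    # An older consumer reads v1 and ignores v2; a newer consumer prefers v2 when present.
    payload = event["versions"].get("v2") or event["versions"]["v1"]
    total = payload["total"]
    return total["amount"] / 100 if isinstance(total, dict) else total
```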

 

More money for open source security won’t work

The best guarantor of open source security has always been the open source development process. Even with OpenSSF’s excellent plan, this remains true. The plan, for example, promises to “conduct third-party code reviews of up to 200 of the most critical components.” That’s great! But guess what makes something a “critical component”? That’s right—a security breach that roils the industry. Ditto “establishing a risk assessment dashboard for the top open source components.” If we were good at deciding in advance which open source components are the top ones, we’d have fewer security vulnerabilities because we’d find ways to fund them so that the developers involved could better care for their own security. Of course, often the developers responsible for “top open source components” don’t want a full-time job securing their software. It varies greatly between projects, but the developers involved tend to have very different motivations for their involvement. No one-size-fits-all approach to funding open source development works ...


Prepare for What You Wish For: More CISOs on Boards

Recently, the Securities and Exchange Commission (SEC) made a welcome move for cybersecurity professionals. In proposed amendments to its rules to enhance and standardize disclosures regarding cybersecurity risk management, strategy, governance, and incident reporting, the SEC outlined requirements for public companies to report any board member’s cybersecurity expertise. The change reflects a growing belief that disclosure of cybersecurity expertise on boards is important as potential investors consider investment opportunities and shareholders elect directors. In other words, the SEC is encouraging U.S. public companies to beef up cybersecurity expertise in the boardroom. Cybersecurity is a business issue, particularly now as the attack surface continues to expand due to digital transformation and remote work, and cyber criminals and nation-state actors capitalize on events, planned or unplanned, for financial gain or to wreak havoc. The world in which public companies operate has changed, yet the makeup of boards doesn’t reflect that.


12 steps to building a top-notch vulnerability management program

With a comprehensive asset inventory in place, Salesforce SVP of information security William MacMillan advocates taking the next step and developing an “obsessive focus on visibility” by “understanding the interconnectedness of your environment, where the data flows and the integrations.” “Even if you’re not mature yet in your journey to be programmatic, start with the visibility piece,” he says. “The most powerful dollar you can spend in cybersecurity is to understand your environment, to know all your things. To me that’s the foundation of your house, and you want to build on that strong foundation.” ... To have a true vulnerability management program, multiple experts say organizations must make someone responsible and accountable for its work and ultimately its successes and failures. “It has to be a named position, someone with a leadership job but separate from the CISO because the CISO doesn’t have the time for tracking KPIs and managing teams,” says Frank Kim, founder of ThinkSec, a security consulting and CISO advisory firm, and a SANS Fellow.


The limits and risks of backup as ransomware protection

One option is to use so-called “immutable” backups. These are backups that, once written, cannot be changed. Backup and recovery suppliers are building immutable backups into their technology, often targeting it specifically as a way to counter ransomware. The most common method for creating immutable backups is through snapshots. In some respects, a snapshot is always immutable. However, suppliers are taking additional measures to prevent these backups being targeted by ransomware. Typically, this is by ensuring the backup can only be written to, mounted or erased by the software that created it. Some suppliers go further, such as requiring two people to use a PIN to authorise overwriting a backup. The issue with snapshots is the volume of data they create, and the fact that those snapshots are often written to tier one storage, for reasons of rapidity and to lessen disruption. This makes snapshots expensive, especially if organisations need to keep days, or even weeks, of backups as a protection against ransomware. “The issue with snapshot recovery is it will create a lot of additional data,” says Databarracks’ Mote.
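
The suppliers discussed here implement immutability inside their own backup and snapshot software; as a separate, generic illustration of the same write-once principle (an assumption for illustration, not their mechanism), object storage locks such as AWS S3 Object Lock can prevent a stored backup copy from being overwritten or deleted until a retention date passes:

```python
# Illustration only: locking a backup object with S3 Object Lock (compliance mode) via boto3.
# Bucket, key, and file names are made up; the bucket must be created with Object Lock enabled.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")
with open("db.dump", "rb") as backup_file:
    s3.put_object(
        Bucket="example-backup-bucket",
        Key="backups/2022-05-12/db.dump",
        Body=backup_file,
        ObjectLockMode="COMPLIANCE",  # retention cannot be shortened or removed once set
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
    )
```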


Four ways towards automation project management success

Having a fundamental understanding of the relationship between problem and outcome is essential for automation success. Process mining is one of the best options a business has to expedite this process. Leyla Delic, former CIDO at Coca Cola İçecek, eloquently describes process mining as a “CT scan of your processes”, taking stock and ensuring that the automation that you want to implement is actually problem-solving for the business. With process mining, expect to go in and try things somewhat blindly at first, learn what works, and only then expand and scale for real outcomes. A recent Forrester report found that 61% of executive decision-makers either are, or are looking at, using process mining to simplify their operations. Constructing a detailed, end-to-end understanding of processes provides the necessary basis to move from siloed, specific task automation to more holistic process automation – making a tangible impact. With the most advanced tools available today, one can even understand in real time the actual activities and processes of knowledge workers across teams and tools, and receive automatic recommendations on how to improve work.


The Power of Decision Intelligence: Strategies for Success

While chief information officers and chief data officers are the traditional stakeholders and purchase decision makers, Kohl notes that he’s seeing increased collaboration between IT and other business management areas when it comes to defining analytics requirements. “Increasingly, line-of-business executives are advocating for analytics platforms that enable data-driven decision making,” he says. With an intelligent decisioning strategy, organizations can also use customer data -- preferably in real time -- to understand exactly where they are on their journeys -- be it an offer for a more tailored new service, or outreach with help if they’re behind on a payment. Don Schuerman, CTO of Pega, says this helps ensure that every interaction is helpful and empathetic, versus just a blind email sent without any context. In the same way that a good intelligence integration strategy can benefit customers, the ability to analyze employee data and understand roadblocks in their workflows helps solve for these problems faster and create better processes, resulting in happier, more productive employees.


Digital exhaustion: Redefining work-life balance

As workers continue to create and collaborate in digital spaces, one of the best things we can do as leaders is to let go. Let go of preconceived schedules, of always knowing what someone is working on, of dictating when and how a project should be accomplished – in effect, let go of micromanagement. Instead, focus on hiring productive, competent workers and trust them to do their jobs. Don’t manage tasks – gauge results. Use benchmarks and deadlines to assess effectiveness and success. This will make workers feel more empowered and trusted. Such “human-centric” design, as Gartner explains, emphasizes flexible work schedules, intentional collaboration, and empathy-based management to create a sustainable environment for hybrid work. According to Gartner’s evaluation, a human-centric approach to work stimulates a 28 percent rise in overall employee performance and a 44 percent decrease in employee fatigue. The data supports the importance of recognizing and reducing the impacts of digital exhaustion.


Late-Stage Startups Feel the Squeeze on Funding, Valuations

Investors are now tracking not only a prospect's burn rate but also their burn multiple, which Sekhar says measures how much cash a startup is spending relative to the amount of ARR it is adding each year. As a result, he says, deals that last year took two days to get done are this year taking two weeks since investors are engaging in far more due diligence to ensure they're betting on a quality asset. "We've seen this in the past where companies spend irresponsibly and just run off a cliff expecting that they'll raise yet another round," Sekhar says. "I think we're going back to basics and focusing on building great businesses." Midstage and late-stage security startups have begun examining how many months of capital they have and whether they should slow hiring to buy more time to prove their value, Scheinman says. Startups want to extend how long they can operate before they have to approach investors for more money, given all the uncertainty in the market, he says. As a result, Scheinman says, venture-backed firms have cut back on hiring and technology purchases and placed greater emphasis on hitting their sales numbers. 
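
For readers unfamiliar with the metric, the arithmetic behind the burn multiple as it is commonly defined (net cash burned divided by net new ARR over the same period) is simple; the figures below are invented:

```python
# Burn multiple as commonly defined: net cash burned / net new ARR. Figures are invented.
net_burn = 8_000_000       # cash spent minus cash collected over the year, in dollars
net_new_arr = 4_000_000    # annual recurring revenue added over the same year

burn_multiple = net_burn / net_new_arr   # 2.0 here: two dollars burned per dollar of new ARR
print(f"Burn multiple: {burn_multiple:.1f}x")
```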



Quote for the day:

"Ninety percent of leadership is the ability to communicate something people want." -- Dianne Feinstein

Daily Tech Digest - May 15, 2022

Compliance Is A Crucial Part Of Digital Transformation—Here’s How To Achieve It

Staying legally compliant should be a priority for any small or medium-sized business looking to remain up and running. For entrepreneurs or those just entering the business environment, learning and understanding compliance may seem daunting. ... Legal compliance must be a top priority, and hiring the right legal counsel can provide your business with vital information to ensure compliance. It is also important to stay updated on state and federal regulations: according to the U.S. Small Business Administration (SBA), there are a few key areas of compliance businesses should be aware of, including internal requirements; ongoing state filing requirements; licenses, permits, and recertifications; and ongoing federal filing requirements. The internet is chock-full of information regarding SMB compliance. It’s also a good idea to consider consulting professional services to help with compliance management. Human resources (HR) professionals are typically well versed in compliance, so use them as resources, too. ... The final tip to remain compliant as an SMB is to use a centralized location for all company communications. Using one platform for all communications makes interactions more efficient and less confusing for employees. 


The Most Important Cybersecurity Step to Implement This Year

In our experience, passwords are prone to user error and difficult to regulate properly. Even complex passwords can be easily bypassed, especially if they’ve been part of a prior security breach. The point is, if a bad actor wants to get into your network, they will target your users’ passwords first -- and very often, they’ll succeed. ... MFA completely changes the password game. Instead of a simple string of text, MFA also requires an additional proof of identity to gain access to an account. Some examples include a PIN sent to your phone, a fingerprint scan, or a mobile authentication app. MFA makes most forms of login credential attacks exponentially harder. In many cases, there’s a 99 percent improvement in your team’s security ... all by adding just a single additional click! There’s really no good reason to ignore MFA. Passwords are so exposed -- and so crucial to identity access management -- that MFA is now a must-have. In fact, MFA is now required by both cyber-insurance providers and multiple compliance standards for government, medical, and manufacturing work. Unless a business employs MFA, renewing cyber-insurance coverage or getting new coverage is often next to impossible these days. It used to be a nice bonus, but now it’s a minimum requirement.
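
As a small, hedged illustration of one common second factor (a time-based one-time password from a mobile authenticator app), the sketch below uses the pyotp library; in practice the secret is provisioned once per user and stored server-side, and the code is typed in by the user:

```python
# Hedged sketch of a TOTP second factor using pyotp; the secret here is generated on the spot.
import pyotp

secret = pyotp.random_base32()     # shared once with the user's authenticator app, e.g. via QR code
totp = pyotp.TOTP(secret)

code_from_user = totp.now()        # in real life this comes from the user's phone, not the server
if totp.verify(code_from_user):    # second factor checks out, on top of the password
    print("MFA check passed")
```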


Enterprise Architecture Is A Foundational Skill For The Engineering Students

Nowadays, engineering graduates and post-graduates usually attain a cursory knowledge of Information Technologies and Information Systems during their curriculum as the majority of the educational programs followed in universities are not in conjunction with Business Informatics, which is an integral requirement for today’s digital organizations. There is a demand for professionals who possess in-depth knowledge in both technical and business spheres. They are required to not only manage the development of products efficiently, but also understand the business context and work to improve the business function by aligning IT with business drivers. This is why the Enterprise Architect’s role is increasing in importance to the business and provides an anchor in a sea of change. Before we move on, let’s do due diligence and get to know what Enterprise Architecture is. It is the process by which organizations standardize and organize IT infrastructure to align with the business goals. These strategies support digital transformation, IT growth, and the modernization of IT as a department.


How new API tools are transforming API management

APIs are taking over the world, revolutionizing the way your enterprise organizes IT, and giving you new ways to reach and secure lots of customers. They are powering supply chains and are re-shaping the value chain. According to a recent Nordic APIs statistics roundup, over 90% of developers are using APIs and they spend nearly 30% of their time coding them. This clearly illustrates how important APIs have become for businesses, but also how much impact they have on the workload of IT professionals. In the wake of the massive growth of API adoption, there has been a surge in both launches and funding of API-centric start-ups. Many focus on innovating business services like communication services, payment processing, anti-fraud services, banking services, etc. Others offer technical capabilities that zoom in on the needs of API providers and consumers - the developers - which raises the question of how these tools complement full lifecycle API management solutions like webMethods API Management. Full lifecycle API management supports all stages of an API's lifecycle, from planning and design through implementation and testing to deployment and operation. It is a cornerstone of your digital business capabilities. 


Four use cases defining the new wave of data management

As the public becomes more aware of how AI is used within organizations, greater scrutiny is being placed upon models. Any semblance of bias – particularly as it relates to race, gender or socioeconomic status – has the potential to erase years of goodwill. Yet, even beyond public optics and moral imperatives, being able to trust AI implementations and easily explain why models arrived at certain results leads to better business decisions. The data fabric helps enable MLOps and Trustworthy AI by establishing trust in data, trust in models and trust in processes. Trust in data is created with the help of many capabilities noted earlier that deliver high-quality data that’s ready for self-service consumption by those who should have access. Trust in models relies upon MLOps-automated data science tools with built-in transparency and accountability at each stage of the model lifecycle. Finally, trust in processes through AI governance delivers consistent repeatable processes that assist not only with model transparency and traceability but also time-to-production and scalability.


Data Quality Metrics: Importance and Utilization

Metrics and KPIs (key performance indicators) are often confused. Key performance indicators are a way of measuring performance over a period of time, while working toward a specific goal. KPIs supply target goals for teams, and milestones to measure progress. Metrics, on the other hand, use dimensions to measure the quality of data. It is, unfortunately, easy to use the terms interchangeably, but they are not the same thing. Key performance indicators can help in developing an organization’s strategy and focus. Metrics are more of a “business as usual” measurement system. A KPI is one kind of metric. ... Business organizations struggle to adapt to the flood of new technologies and data processing techniques. The ability to not only adjust to changing circumstances, but to eclectically embrace the best of those technologies and techniques, can lead to long-term improvements, help to minimize work stress, and increase profits. Using high-quality data for decision-making can be the difference between success and failure. The key goals of a business are to become more profitable and successful, and high data quality can help to achieve those goals.


An In-Depth Guide on the Types of Blockchain Nodes

Full nodes are responsible for maintaining the entire transaction record in a blockchain network. They are regarded as the blockchain’s servers where the data is stored and maintained. There are several governance models of a blockchain that full nodes can come under. If there are any improvements to be made to a blockchain, a majority of full nodes must be ready for it. So, it can be concluded that full nodes are given voting power in order to make any changes in a blockchain. However, certain scenarios can also arise when a change is not implemented even after the majority of full nodes agree to the change. It can happen when a big decision has to be made. ... Pruned nodes are given a specific memory capacity to store data. This means that any number of blocks can be added, but a pruned node can store only a limited number of blocks. To maintain the ledger, pruned nodes keep downloading blocks until they reach the specified limit. Once the limit is attained, the node starts deleting the oldest blocks and making space for new ones in order to maintain the blockchain’s size. 
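
A toy sketch of the pruning behaviour described above (blocks keep arriving, but once the configured limit is reached the oldest ones are discarded); the limit and block contents are arbitrary:

```python
# Toy illustration of a pruned node's storage policy: keep only the most recent blocks.
from collections import deque

PRUNE_LIMIT = 1_000                        # maximum number of blocks this node retains
recent_blocks = deque(maxlen=PRUNE_LIMIT)  # appending past maxlen silently drops the oldest entry

def on_new_block(block: dict) -> None:
    recent_blocks.append(block)            # a full node would persist every block instead

for height in range(1_500):
    on_new_block({"height": height})

print(len(recent_blocks), recent_blocks[0]["height"])  # 1000 blocks kept, oldest is height 500
```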


Crypto-assets and Decentralized Finance: A Primer

There is also a range of other activities—mostly occurring off the blockchain—that are linked to this simplified DeFi structure. These include asset management, automated trading bots, supply of data that are required inputs into conditional smart contracts, and blockchain governance arrangements (such as votes taken to determine the evolving structure of the blockchain). (In the language of DeFi, the suppliers of external data such as asset prices are known as “oracles.”) There is also a range of other off-chain providers—including exchanges and app developers—who combine many of these activities to facilitate retail and wholesale access to the DeFi system. To understand the mechanics of DeFi, it is useful to think of a smart contract as a vending machine. After someone identifies the quantity and type of the items they wish, and provides payment, the machine dispenses the desired objects. Indeed, this type of protocol is quite common even in TradFi. For example, crediting accounts with interest payments on a regular schedule requires that the bank’s operations receive signals on the interest rate and the date.


5 reference architecture designs for edge computing

Latency can be a major problem for applications that depend upon real-time access to data. Edge computing, which places computing near the user's or data source's physical location, is a way to deliver services faster and more reliably while gaining flexibility from hybrid cloud computing. This speed is vital in industries such as healthcare, utilities, telecom, and manufacturing. There are three categories of edge use cases: The first is called enterprise edge, and it allows customers to extend application services to remote locations. It has a core enterprise data store located in a datacenter or as a cloud resource. The second is operations edge, which focuses on analyzing inputs in real time (from Internet of Things sensors, for example) to provide immediate decisions that result in actions. For performance reasons, this generally happens onsite. This kind of edge is a place to gather, process, and act on data. The third category is provider edge, which manages a network for others, as in the case of telecommunications service providers. This type of edge focuses on creating a reliable, low-latency network with computing environments close to mobile and fixed users.


Mapping the Future Part 4: Technical Roadmaps

There is a give and take that must be accounted for to align the technical execution with the business planning. This is what makes the technical roadmap so important: It takes the ideas and validates their feasibility. This give and take is dependent on two constraints: budget and available resources. Budget planning can be difficult. There is always a need to control costs, but at the same time, you need to invest in the future. This is where the strategy and capability roadmap are important. They provide a lens through which the budget decision-making can be performed. The budget limits what can be done. What capabilities are most important to implement? What technologies are truly required to support the capabilities? What is the return on investment? This latter question can be difficult to answer. Traditionally, ROI has been analyzed on a per-project basis. But when we are talking about technologies and capabilities, an individual project may cross capabilities, or a capability may require several dependent projects before the ROI is realized.



Quote for the day:

"If a leader loves you, he makes sure you build your house on rock." -- Ugandan Proverb

Daily Tech Digest - May 14, 2022

Non-Cloud Native Companies: How the Developer Experience Can Make Digital Transformation Easier

To force the cultural change, Infrastructure and DevOps teams might be trying their best to serve the developer, but walking a mile in someone else’s shoes isn’t easy even with the best of intent. Consider cross-pollinating the teams, rotating a few individuals every so often, as the permanent state. That way, those creating the developer experience will have to experience it themselves, which tends to blow up any feeling of pride in one’s creation. In the opposite direction, the application developer gets to explain the problems inside the DevOps team in a much more effective way than in a series of meetings. Above all, the tactic helps the overall culture of collaboration in a more effective way than I’ve seen result from any insistence by management that “we’re one team”. Furthermore, application developers crave understanding what they are trying to accomplish and problem solving in light of it. A happy developer is one who works directly with business people who define the goals, use creativity to solve them, and experience the results. An unhappy developer is one who builds something dictated without understanding why, and never finds out if it worked.


Present and Future of the Microservice Architecture

Ultimately, the advantage of microservices is that it decouples development, it reduces developmental coupling so that teams can make progress more independently of one another. Otherwise, it's just a service oriented architecture. It's not microservices. That decoupling is important. One of the things that I like in most definitions of microservices is that people say they should be aligned with a bounded context. That makes sense to me. I was chatting with Eric Evans about this a couple of weeks ago, and he came up with an idea that resonated with me, which is that the messaging layer is a separate bounded context. I think multiple separate bounded contexts. You have the bounds of the service, and then the messaging is something else. The protocol of exchanging information between the services is another abstraction. One of the things that resonates with me, another thing from Eric's book, is that you always translate when you're crossing bounded context. We should be translating the messages as they go across. Then that makes the example that Holly came up with an easier problem to deal with, where we have these ideas that are sometimes the same and sometimes different and sometimes related.
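
As a rough sketch of that "translate when crossing a bounded context" idea (all names invented, not taken from the talk), each service maps the message contract into its own model at the boundary instead of using the wire format directly:

```python
# Illustrative anti-corruption-style translation at a service boundary; names are invented.
from dataclasses import dataclass

@dataclass
class Customer:            # this service's internal model
    customer_id: str
    display_name: str

def translate_customer_event(message: dict) -> Customer:
    # The messaging bounded context uses its own field names; translate rather than reuse them.
    return Customer(
        customer_id=message["custRef"],
        display_name=f'{message["firstName"]} {message["lastName"]}',
    )

incoming = {"custRef": "C-42", "firstName": "Ada", "lastName": "Lovelace"}
customer = translate_customer_event(incoming)
```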


Threat Actors Use Telegram to Spread ‘Eternity’ Malware-as-a-Service

Eternity—which researchers discovered on a TOR website, where the malware-as-a-service also is for sale—demonstrates the “significant increase in cybercrime through Telegram channels and cybercrime forums,” researchers wrote in the post. This is likely because threat actors can sell their products without any regulation, they said. Each module is sold individually and has different functionality that researchers suspect is being repurposed from code in an existing Github repository, which project developers are then modifying and selling under a new name, according to Cyble. “Our analysis also indicated that the Jester Stealer could also be rebranded from this particular Github project which indicates some links between the two threat actors,” they wrote. ... Threat actors are selling the Eternity Worm, a virus that spreads through infected machines via files and networks, for $390. Features of the worm include its ability to spread through the following: USB Drives, local network shares, various local files, cloud drives such as GoogleDrive or DropBox, and others. It also can send worm-infected messages to people’s Discord and Telegram channels and friends, researchers said.


Digital transformation on the CEO agenda

There are three rules of thumb that seem to be evolving. First is that companies that get the most value from this actually spend a lot of effort thinking about, “What are the new digital businesses to launch? How can we create new value with new products and new customers versus transforming the existing business processes?” There’s sort of a duality—you should spend as much focus on new digital business building as you do on transforming the current business. Rule of thumb number two is, you’ve got to focus on things that are big enough. And maybe that’s obvious, but it sometimes surprises us how many people will call something a digital transformation, and you add up the total economic impact, and it’s less than, say, 15 or 20 percent of the company’s overall EBITDA. If you’re not targeting at least 15 or 20 percent, in our mind it’s hard to call that a transformation and to sustain the level of organizational focus around it. And then the third rule of thumb is, it’s best to start with a concentration in a particular area rather than sprinkle a little bit of digital or a handful of analytics use cases broadly across the organization. 


Intro to Micronaut: A cloud-native Java framework

Micronaut delivers a slew of benefits gleaned from older frameworks like Spring and Grails. It is billed as "natively cloud native," meaning that it was built from the ground up for cloud environments. Its cloud-native capabilities include environment detection, service discovery, and distributed tracing. Micronaut also delivers a new inversion-of-control (IoC) container, which uses ahead-of-time (AoT) compilation for faster startup. AoT compilation means the startup time doesn't increase with the size of the codebase. That's especially crucial for serverless and container-based deployments, where nodes are often shut down and spun up in response to demand. Micronaut is a polyglot JVM framework, currently supporting Java, Groovy, and Kotlin, with Scala support underway.  ... One cloud-native concept that Micronaut supports is the federation. The idea of a federation is that several smaller applications share the same settings and can be deployed in tandem. If that sounds an awful lot like a microservices architecture, you are right. The purpose is to make microservice development simpler and keep it manageable. See Micronaut's documentation for more about federated services.


4 Best Practices for Microservices Authorization

In the past, most authorization decisions have happened at the gateway — and developers can still enforce authorization there for microservices, if they like. However, for security, performance and availability, it’s typically preferable to also enforce authorization steps for each microservice API. As mentioned, in a zero-trust architecture, every request must be both authenticated and authorized before it is allowed. It’s entirely possible to send each of these authorization requests to a centralized service. However, this can add significantly to latency — for instance, a single user request might traverse numerous services, and if each of those requests requires an additional network hop to reach that centralized authorization engine, that can hamper the user experience. If you’re using a tool like OPA, fortunately, you can also run a local authorization engine and policy library as a sidecar to each microservice. Here is an example of what this architecture looks like with an Istio service mesh, which uses an Envoy proxy sidecar. Using this model, you can ensure that each request passes muster with an authorization check while maximizing the performance and availability of the service.
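
As a hedged sketch of that sidecar call, the snippet below asks a local OPA instance for a decision over its Data API; the policy path ("httpapi/authz") and input fields are assumptions that depend entirely on the Rego policies you load into OPA:

```python
# Hedged sketch: querying an OPA sidecar over its REST Data API for an allow/deny decision.
# The policy path and input fields are assumptions; they must match your own Rego policies.
import requests

OPA_URL = "http://localhost:8181/v1/data/httpapi/authz/allow"  # sidecar running next to the service

def is_allowed(subject: str, method: str, path: str) -> bool:
    decision = requests.post(
        OPA_URL,
        json={"input": {"subject": subject, "method": method, "path": path}},
        timeout=0.05,  # a local sidecar call, so the latency budget can stay tiny
    ).json()
    return decision.get("result", False)  # deny by default if the policy returns nothing

# if not is_allowed("alice", "GET", "/orders/123"): raise PermissionError("denied")
```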


Just in time? Bosses are finally waking up to the cybersecurity threat

"Today boards say, 'Can you come and brief our board, and can you stay while the CISO's briefing the board? And can you please give us a view about the quality of our controls and our estimation of risk?', which is hugely transparent," she said, speaking at the UK National Cyber Security Centre's (NCSC) Cyber UK conference in Newport, Wales. "I see that as well, it feels as if it's really maturing," said Lindy Cameron, CEO of the NCSC. "We've been trying really hard over the last few months to get organisations to step up but not panic, do the things we've asked them to for a long time and take it more seriously". The NCSC regularly issues advice to organisations on how to improve and manage cybersecurity issues, ranging from ransomware threats to potential nation state-backed cyberattacks – and Cameron said she's seen a more hands-on approach to cybersecurity from business leaders in recent months. "I've seen chief execs really asking their CISOs the right questions, rather than leaving them to it because they don't have to understand complex technology. It does feel like a much more engaging strategic conversation," she said.


Center for Threat-Informed Defense, Microsoft, and industry partners streamline MITRE ATT&CK® matrix evaluation for defenders

The methodology and insights from the top techniques list has many practical applications, including helping prioritize activities during triage. As it’s applied to more real-world scenarios, we can identify areas of focus and continue to improve our coverage on these TTPs and behaviors of prevalent threat actors. Refining the criteria can further increase results accuracy and make this project more customer-focused and more relevant for their immediate action. ... This collaboration and innovation benefits everyone in the security community, not only those who use the MITRE ATT&CK framework as part of their products and services, but also our valued ecosystem of partners who build services on top of our platform to meet the unique needs of every organization, to advance threat-informed defense in the public interest. Microsoft is a research sponsor at the Center for Threat-Informed Defense, partnering to advance the state of the art in threat-informed defense in the public interest. One of our core principles at Microsoft is security for all, and we will continue to partner with MITRE and the broader community to collaborate on projects like this and share insights and intelligence.


How Waterfall Methodologies Stifle Enterprise Agility

Traditional organizational architecture can impose limitations on an enterprise’s ability to successfully reach its digital transformation goals. The up-front model, with a focus on one long-range project, can slow productivity and choke creativity. While planning is needed as agility scales, the detailed technology life cycles with large timeline projections are no longer effective or profitable in meeting the business mandates that drive enterprises forward. Enterprise leaders are increasingly abandoning the five-year architectural plan for one that is designed to evolve with the ever-changing software development environment. Enterprise architects must now develop and promote adaptive methods that support agility in order to appropriate the value of new technologies like AI, machine learning, big data, IoT and intuitive tools that enable advanced analytics and enterprise-wide collaboration. A less intentional architecture, decomposed into smaller units, can be managed by autonomous cross-functional teams that are accountable to peers and managers with shared strategic objectives, bringing all fields into a coherent whole.


Seven Ways to Fail at Microservices with Holly Cummins

We're starting to use CICD as a noun rather than a verb and we think it's something that we can buy and then put on the shelf and then we have CICD. But if we sort of think about the words in CICD it's continuous integration and continuous delivery or deployment, confusingly. And so what I often see is I'll see teams where they're using feature branches and they'll integrate their feature branch once a week. So that of course is not continuous integration. It's better than every six months, but it's fundamentally not continuous. And really, I think if you're doing continuous integration, which you should be, everybody should be aiming to integrate at least once a day. And that does mean that you have to have some different habits in terms of your code, you sort of need to start coding with the things that aren't visible and then go on to the things that are visible and other things like that. You need to make sure that your quality's in place so that you've got the tests in place first so that you don't accidentally deliver something terrible.



Quote for the day:

"Leaders know the importance of having someone in their lives who will unfailingly and fearlessly tell them the truth." -- Warren G. Bennis

Daily Tech Digest - May 12, 2022

SD-WAN and Cybersecurity: Two Sides of the Same Coin

SD-WAN is a natural extension of NGFWs that can leverage these devices’ content/context awareness and deep packet inspection. The same classification engines used by NGFWs to drive security decisions can also determine the best links to send traffic over. These engines can also guide queueing priorities, which in turn enables fine-grained quality-of-service (QoS) controls. ... Centralized cloud management is key to enabling incremental updates of these new features. Further, flexible policy-driven routing enables service chaining of new security features in the cloud rather than building these features into the SD-WAN customer premises equipment (CPE). For example, cloud-based services for advanced malware detection, secure web gateways, cloud-access security brokers, and other security features can be enabled via the SD-WAN platform, seamlessly bringing these and other next-gen security functions across the enterprise. The coordination between the cloud-based SD-WAN service and the on-premises SD-WAN CPE allows new security applications to benefit from both the convenience and proximity of an on-site device and the near-infinitely scalable computing power of the cloud.


Introducing AlloyDB for PostgreSQL: Free yourself from expensive, legacy databases

As organizations modernize their database estates in the cloud, many struggle to eliminate their dependency on legacy database engines. In particular, enterprise customers are looking to standardize on open systems such as PostgreSQL to eliminate expensive, unfriendly licensing and the vendor lock-in that comes with legacy products. However, running and replatforming business-critical workloads onto an open source database can be daunting: teams often struggle with performance tuning, disruptions caused by vacuuming, and managing application availability. AlloyDB combines the best of Google’s scale-out compute and storage, industry-leading availability, security, and AI/ML-powered management with full PostgreSQL compatibility, paired with the performance, scalability, manageability, and reliability benefits that enterprises expect to run their mission-critical applications. As noted by Carl Olofson, Research Vice President, Data Management Software, IDC, “databases are increasingly shifting into the cloud and we expect this trend to continue as more companies digitally transform their businesses. ...”


Visualizing the 5 Pillars of Cloud Architecture

If you understand your cloud infrastructure, you can more confidently ensure your customers can rely on your organization. With the ability to constantly meet your workload demands and quickly recover from any failures, your customers can count on you to consistently meet their service needs with little interruption to their experience. A great way to increase reliability in your cloud infrastructure is to set key performance indicators (KPIs) that allow you to both monitor your cloud and alert the proper team members when something within the architecture fails. Using a cloud visualization platform to filter your cloud diagrams and create different visuals of current, optimal and potential cloud infrastructure allows you to compare what is currently happening in the cloud to what should be happening. ... Many factors can impact cloud performance, such as the location of cloud components, latency, load, instance size and monitoring. If any of these factors become a problem, it’s essential to have procedures in place that result in minimal deficiencies in performance. 


Zero Trust Does Not Imply Zero Perimeter

Don’t get me wrong, the concept of trusting the perimeter is fairly old-school/outdated and does come into conflict with more modern “cloud native” approaches. Remote users will also have issues with latency, especially if you require the users to VPN to your on-premises network and finally establish connectivity with the cloud. The theoretical modern approach is to not trust that perimeter. This doesn’t mean you have to get rid of it, but rather it’s not the default, since increasingly the perimeter is becoming more porous and ill-defined. This is as opposed to when moving to a “zero-trust” model, where everything needs to be proven for both the user identity and device prior to any data, application, assets and/or services (DAAS) being permitted to communicate to any services. Going further down memory lane, back in the day the perimeter used to mean that everything was located within your “castle” and perimeter-based system access was “all or nothing” by default. Once users were in, they were in, which also applies to any other type of actor, including malicious actors. Once the perimeter was breached, the malicious actor effectively had unlimited access to everything within the perimeter.


As Inflation Skyrockets, Is Now the Time to Pull Back on New IT Initiatives?

There are two big risks associated with pulling back, says Ken Englund, technology sector leader at business advisory firm EY Americas. Pulling back on projects may increase the risk of IT talent turnover, he warns. “Pausing or changing priorities for tactical, short-term reasons may encourage talent to depart for opportunities on other companies' transformational programs.” Also, given current inflationary pressure, “the cost to restart a project may be materially more expensive in the future than it is to complete today.” There's no doubt that pulling back on IT spend saves money over the short term, but short-sighted savings could come at the cost of long-term success. “If an organization must look to cut budgets, start with a strategic review of all projects, identifying which have the greatest possible impact and least amount of risk,” Lewis-Pinnell advises. Examine each project's total cost of ownership and rank them by cost and impact. Strategic selection of IT initiatives can help IT leaders manage through inflationary challenges. “Don’t be afraid to cut projects that aren’t bringing you enough benefit,” she adds.


Cyber-Espionage Attack Drops Post-Exploit Malware Framework on Microsoft Exchange Servers

CrowdStrike's analysis shows the modules are designed to run only in-memory to reduce the malware's footprint on an infected system — a tactic that adversaries often employ in long-running campaigns. The framework also has several other detection-evasion techniques that suggest the adversary has deep knowledge of Internet Information Services (IIS) Web applications. For instance, CrowdStrike observed one of the modules leveraging undocumented fields in IIS software that are not intended to be used by third-party developers. Over the course of their investigation of the threat, CrowdStrike researchers saw evidence of the adversaries repeatedly returning to compromised systems and using IceApple to execute post-exploitation activities. Param Singh, vice president of CrowdStrike's Falcon OverWatch threat-hunting services, says IceApple is different from other post-exploitation toolkits in that it is under constant ongoing development even as it is being actively deployed and used. 


Zero Trust, Cloud Adoption Drive Demand for Authorization

Hutchinson advises enterprises to leverage a model that combines traditional coarse-grained role-based access rules, or RBAC, with a collection of finer-grained attribute-based access rules, or ABAC, that can describe not only the consumer of a service but also the data, system, environment and function. "While traditional RBAC models are easier for developers and auditors to understand, they usually result in role explosion as the system struggles to provide finer-grained authorization. ABAC addresses that fine-grained need but sacrifices both management and understanding as the vast array of elements necessary for such a system makes organizing the data extremely complex," says Hutchinson. He adds: "A complex policy rule might say: 'A customer's transactional data can only be viewed via a secure device at a bank branch by an accredited teller who is from the same country of origin as the customer.' Instead of creating a plethora of new roles to cover all of the different possible combinations, I can use the teller role while also checking attributes that will provide device profile, location, accreditation status and country of origin."
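
The quoted rule translates naturally into a coarse RBAC check combined with fine-grained ABAC checks; the sketch below is only an illustration, and all attribute names are invented:

```python
# Illustrative combined RBAC + ABAC check for the policy rule quoted above; names are invented.
def can_view_transactions(subject: dict, resource: dict, context: dict) -> bool:
    return (
        "teller" in subject["roles"]                           # RBAC: coarse role check
        and context["device_profile"] == "secure"              # ABAC: environment attributes
        and context["location_type"] == "bank_branch"
        and subject["accreditation_status"] == "accredited"    # ABAC: subject attributes
        and subject["country_of_origin"] == resource["customer_country"]
    )

subject = {"roles": ["teller"], "accreditation_status": "accredited", "country_of_origin": "US"}
resource = {"customer_country": "US"}
context = {"device_profile": "secure", "location_type": "bank_branch"}
assert can_view_transactions(subject, resource, context)
```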


The Cloud Native Community Needs to Talk about Testing

After gathering feedback from the community, including DevOps and QA engineers, the consensus I heard was that cloud native is clearly a developing field that is still establishing its best practices. We can look at other examples of areas that are still maturing. Not that long ago, we started to hear about DevOps, which introduced the concept of shorter, more efficient release cycles; today that feels like the standard. More recently, GitOps has followed the same track, and more teams are now using Git to manage their infrastructure. I believe cloud native testing will soon follow suit: teams will stop seeing testing as a burden or an extra piece of work that is merely "nice to have" and start treating it as part of the process, one that saves them a great deal of development time. I'm sure most of you reading this are tech enthusiasts who have been building and shipping products for quite some time, and many of you will have noticed that integration testing on Kubernetes presents major challenges, especially when it comes to configuring tests in your continuous integration/continuous delivery (CI/CD) pipelines to follow a GitOps approach.
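As one deliberately simplified illustration of what an integration test against a cluster can look like, the sketch below uses the official Python kubernetes client to assert that a Deployment becomes ready. The deployment and namespace names are placeholders, and a GitOps-driven pipeline would typically run a test like this as a step after the manifests have been synced.

    # pip install kubernetes
    import time

    from kubernetes import client, config

    def wait_for_deployment_ready(name: str, namespace: str, timeout_s: int = 120) -> None:
        """Poll a Deployment until all desired replicas report ready, or fail."""
        config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
        apps = client.AppsV1Api()
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            dep = apps.read_namespaced_deployment(name, namespace)
            desired = dep.spec.replicas or 0
            ready = dep.status.ready_replicas or 0
            if desired and ready == desired:
                return
            time.sleep(5)
        raise AssertionError(f"{namespace}/{name} not ready within {timeout_s}s")

    def test_web_deployment_is_ready():
        wait_for_deployment_ready("web", "staging")  # placeholder names for this sketch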


Hybrid work: Best practices in times of uncertainty

Humans are social creatures who require some contact with others, but determining the right balance between proximity and contact in the virtual workplace is difficult – too much contact can be exhausting, and too little can lead to isolation. Work to find a balance that can help support your staff as they navigate the nuanced world of remote work. It’s also important to adopt a blended approach to technology and physical space. A combination of co-working spaces and telepresence tools can be just what you need to facilitate contact and collaboration among employees. This allows for an open environment where people can both collaborate and decompress in their own way while also bringing a sense of connection that may be impossible to achieve in a virtual environment. ... It’s not easy to develop policies that address both business and human needs in remote and hybrid work environments, but one thing remains certain: flexibility paired with autonomy is essential for success. CIOs play a critical role in creating an environment of flexibility and autonomy for staff members – one that can help support their professional development while also fostering increased satisfaction and success.


10 best practices to reduce the probability of a material breach

Cybersecurity is as much about humans as it is about technology. Organizations see fewer breaches and faster times to respond when they build a “human layer” of security, create a culture sensitive to cybersecurity risks, build more effective training programs, and develop clear processes for recruiting and retaining cyber staff. ... Organizations with no breaches invest in a mix of solutions, from the fundamentals such as email security and identity management, to more specialized tools such as security information and event management systems (SIEMs). These organizations are also more likely to take a multi-layered, multi-vendor security approach to monitor and manage risks better through a strong infrastructure. ... With digital and physical worlds converging, the attack surfaces for respondents are widening. Organizations that prioritize protection of interconnected IT and OT assets experience fewer material breaches and faster times to detect and respond.



Quote for the day:

"Good leaders make people feel that they're at the very heart of things, not at the periphery." -- Warren G. Bennis

Daily Tech Digest - May 11, 2022

Doing data warehousing the wrong way

Ask enterprises how they feel about their data warehouses, and a high percentage express dissatisfaction. They struggle to load data; they have unstructured data the warehouse can’t handle; and so on. These aren’t necessarily problems with the data warehouse, however. I’d hazard a guess that usually the dissatisfaction arises from trying to force the data warehouse (or analytical database, if you prefer) to do something for which it’s not well suited. Here’s one way the error starts, according to Sammer: By now, everyone has seen the rETL (reverse ETL) trend: You want to use data from app #1 (say, Salesforce) to enrich data in app #2 (Marketo, for example). Because most shops are already sending data from app #1 to the data warehouse with an ELT tool like Fivetran, many people took what they thought was a shortcut, doing the transformation in the data warehouse and then using an rETL tool to move the data out of the warehouse and into app #2. The high-priced data warehouse, data lake, ELT, and rETL vendors were happy to help users deploy what seemed like a pragmatic way to bring applications together, even at serious cost and complexity.
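To make the pattern concrete, here is a bare-bones sketch of the reverse-ETL step being described: read rows that were already transformed inside the warehouse and push them to a second application's API. The table name, endpoint, and field names are placeholders; real rETL tools layer batching, retries, and field mapping on top of this.

    import requests  # pip install requests

    def reverse_etl(warehouse_conn, endpoint: str, api_key: str) -> None:
        """Push warehouse-enriched rows out to app #2 via its REST API."""
        # warehouse_conn is any DB-API 2.0 connection; Snowflake, Redshift, and
        # BigQuery drivers all expose one. The table below is hypothetical.
        cur = warehouse_conn.cursor()
        cur.execute("SELECT email, lead_score FROM analytics.enriched_leads")
        for email, lead_score in cur.fetchall():
            resp = requests.post(
                endpoint,  # e.g. the second app's contact-update endpoint
                headers={"Authorization": f"Bearer {api_key}"},
                json={"email": email, "leadScore": lead_score},
                timeout=10,
            )
            resp.raise_for_status()

The point of the critique is that every hop (ELT into the warehouse, transformation, then rETL back out) adds tooling and cost that a more direct integration might avoid.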


5 ways AI can help solve the privacy dilemma

Protecting privacy while allowing the economy to flourish is a data challenge. AI, machine learning, and neural networks have already transformed our lives, from robots to self-driving cars to drug development to a generation of smart assistants that will never double book you. There is no doubt that AI can power solutions and platforms that protect privacy while giving people the digital experiences they want and allowing businesses to profit. What are those experiences? It’s simple and intuitive to every Internet user. We want to be recognized only when it makes our lives easier. That means recognizing me so I don’t have to go through the painful process of re-entering my data. It means giving me information — and yes, serving me an ad — that is timely, relevant, and aligns with my needs. The opportunities within the “personalization economy,” as I call it, are vast. McKinsey published two white papers about the size of the opportunity and how to do it right. Interestingly — and tellingly — the word “privacy” isn’t mentioned a single time in either of those white papers. That oversight is remarkable and overlooks the tension between privacy and personalization.


Building a Strong Business Case for Security and Compliance

Cybersecurity is not a revenue-generating service or product, so it is prudent to show that protecting an organisation from losses is the only way for any financial benefit to be gained. Try to communicate to the board in numbers; for example, show that a £1 investment would stop a security event that could potentially cost the company £10. That way, it should be possible to get the board to vote on your side by demonstrating the business case and the return on investment in security measures and protection. To help the board make its investment decision in security, give it data focused on threat vectors that are already evident, such as inadequate security awareness and employee training, processes and policies that are not consistently applied and recorded, or a lack of data backup and patching practices. Formulating a risk/reward equation using a tiered security approach is a good way forward, as you can then direct investment towards incident response, detection and compliance. Once you have created a robust and compelling business case for your organisation, you need to share the proposal with the board.
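The £1-stops-£10 framing is, at bottom, a return-on-security-investment calculation. A minimal sketch, with every figure invented for illustration:

    def rosi(ale_before: float, ale_after: float, cost: float) -> float:
        """Return on security investment: (loss avoided - cost) / cost.

        ALE is annualised loss expectancy, i.e. the expected incident cost per
        year before and after the control is in place.
        """
        return (ale_before - ale_after - cost) / cost

    # Hypothetical numbers: a £100k control expected to cut annual breach losses
    # from £1.5m to £0.5m.
    print(f"ROSI: {rosi(1_500_000, 500_000, 100_000):.0%}")  # 900%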


How to Stop Failing at Data

Data projects are doomed when the people who plan and the people who execute don’t have the same tools, the same access, or even the same goals. Data scientists are really good at asking the right questions and running exploratory models, but they don’t know how to scale. Meanwhile, data engineers are experts at making data pipelines that scale, but they don’t know how to find the insights. We’ve been using tools that require such a high level of specialist expertise that it’s impossible to get everyone on the same page. Because data scientists only ever touch small subsets of the data, there’s no way for them to extrapolate their models to function at scale. They don’t have access to production-grade data technology, so they have no way of understanding the constraints of building complex pipelines. Meanwhile, data engineers are being handed algorithms to implement with the barest context of the business problem they’re trying to solve and with little understanding of how and why data scientists have settled on this solution. There may be some back and forth, but there’s rarely enough common ground to build a foundation.


Exploring the Gaps in Scrum Mastery

Often people assume that Scrum is just a work management approach that helps us increase efficiency by organizing our tasks. Instead, it is intended to enable people to work in focused, collaborative, autonomous teams that use empiricism, creativity, and innovation to pursue opportunities to deliver value to customers by solving complex problems. To be creative in solving challenging problems, the Scrum Team must feel safe enough to experiment, fail, and learn through empiricism. They need to view each backlog item, interaction, and piece of data as an opportunity to learn and optimize. If these things are not possible, the team will not thrive. How do we, as Scrum Masters, build an environment where this is possible? To help groups of people form into high-functioning teams, they need ownership, inspiring purpose, and self-accountability. These traits inspire curiosity and will encourage them to take responsibility for their own work, how they work as a team, and how they work with those outside of the team. How do we, as Scrum Masters, build an environment where this is possible?


Agile/Scrum is a Failure – Here’s Why

The Church of Agile is being corrupted from within by institutional forces that [can’t] adapt to the radical humanity [of] collaborative, self-organizing, cross-functional teams. … Agile wasn’t supposed to be this way. … Agile is supposed to be centered on people, not processes. … But many businesses instead prioritize controlling their commodity human resources. … Companies have dressed it up in Scrum’s clothing, claiming Agile ideology while reasserting Waterfall’s hierarchical micromanagement. … Properly implemented Scrum or Kanban [should] lead to the desired outcome within finite time and budget. … Stories as mini-Waterfalls [treat] the engineer as a cog in their employer’s machine … with no understanding of the craft, creativity, and critical thinking required to solve such complex problems. … Scrumfall relies, in other words, on the product team … providing a complete and perfect specification before development begins. And it relies on the development team … planning out a complete and perfect implementation before a single line of code is written. … The invading Waterfall taskmasters hidden in Scrum’s Trojan Horse absolutely hate uncertainty. 


All About Ecstasy, a Language Designed for the Cloud

Ecstasy’s emphasis on predictability is perhaps best illustrated via the type system, known as the Turtles Type System because it is bootstrapped on itself. As in Smalltalk, everything in Ecstasy is an object, and all Ecstasy types are built out of other Ecstasy types. In other words, unlike in Java or C#, there is no secondary primitive type system, and chars, ints, bits, and booleans are all objects. In common with Java and C#, there is a single root called Object — although, in Ecstasy, Object is an interface, not a class. Technically, the type system supports a long and rather intimidating-looking list of features: it is fully generic and fully reified, covariant, module-based, transitively closed, type-checked and type-safe. The majority of type-safety checks are performed by the compiler and re-checked by the link-time verifier; only those checks in which the types cannot be fully known beforehand are performed at runtime, specifically to allow support for type variance. “The Ecstasy language rules automatically handle covariance and contravariance,” Purdy wrote in an email response to The New Stack.


An offensive mindset is crucial for effective cyber defense

Threat intelligence is a key component of developing an offensive mindset. That’s why proactive cybersecurity auditing can be one of the best courses of action in stopping cyberattacks before they can impact an organization. To implement the right changes to cybersecurity strategy, an organization needs to fully understand its existing network vulnerabilities. This can be accomplished through a few different tactics, including penetration testing and vulnerability scanning. Penetration testing involves a person purposefully hacking into a network to identify weaknesses in an organization’s systems, while vulnerability scanning consists of an automated test that looks for potential security vulnerabilities. Both tactics enable organizations to better grasp the mind of a hacker and understand the “how” behind a potential attack. Something else to be considered – under the right circumstances – is the possibility of hiring a former hacker. Their insight could prove extremely helpful, as aptitude in identifying weaknesses can be a useful asset. Many former hackers find that a role as a penetration tester or red team member fulfills their desire to expose system flaws while doing so legally, for the betterment of security.
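As a toy illustration of the automated side of this, the sketch below checks a handful of well-known ports on a host you own and are authorised to test. Real vulnerability scanners go much further, fingerprinting services and matching versions against known CVEs; this only shows the basic shape of an automated check.

    import socket

    COMMON_PORTS = {21: "ftp", 22: "ssh", 23: "telnet", 80: "http", 443: "https", 3389: "rdp"}

    def scan(host: str, timeout: float = 1.0) -> dict:
        """Rough TCP reachability check of a few well-known ports (authorised hosts only)."""
        open_ports = {}
        for port, service in COMMON_PORTS.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(timeout)
                if sock.connect_ex((host, port)) == 0:  # 0 means the TCP connect succeeded
                    open_ports[port] = service
        return open_ports

    if __name__ == "__main__":
        print(scan("127.0.0.1"))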


Why businesses need to help employees build friendships

The past few years have made this worse. At many companies, the entire staff quickly became remote, and the days of team lunches, onsite gyms, happy hours, and chats in the hallway disappeared. Suddenly, that company culture ceased to exist. Even as some people returned to the office many weeks or months later, many others did not. As companies institute remote or hybrid working environments on a permanent basis, there are fewer opportunities to build relationships with colleagues in person. The loss of work friendships is likely one reason so many people are choosing to leave their jobs, as CNBC reported. And among those who stay, success and creativity take a hit. In a recent study, Yasin Rofcanin, a professor of management at the University of Bath in the UK, and a group of colleagues found that friendship between coworkers is the most crucial element for enhancing employee performance. The isolation takes perhaps the biggest toll on mental and emotional health. Feelings of isolation are deeply intertwined with stress and anxiety. Without other people to lean on, it can be much more difficult for colleagues to find the resiliency they need to face each workday.


The three most dangerous types of internal users to be aware of

Cautious users are willing to comply with new protocol changes, but just need some time to fully adjust. They may need more gentle encouragement than the typical user, as they take more of a “wait-and-see” approach to new cyber security changes. This may be due to fear that any changes could disrupt their workflow. This can pose a serious risk as vulnerabilities are more exposed during major changes to security. ... Traditionalist users are generally hostile to change and often do not trust IT help desks, thinking that the processes for asking for help are too time consuming. Because they do not engage with understanding how these new changes will directly impact their everyday workloads, some may either wait until the last minute before integrating the new security changes, or resist altogether. ... Like traditionalists, overachievers may ignore cyber training sessions, emails from IT, or avoid learning new authentication processes – seeing these as below their skill level. However, this group of users is often overlooked when an assessment is performed, as through their own experiences, they may feel that the resources within the organisation are not adequate.



Quote for the day:

"Increasingly, management's role is not to organize work, but to direct passion and purpose." -- Greg Satell