Daily Tech Digest - June 10, 2022

Everything You Need to Know About Enterprise Architecture vs. Project Management

Even though both disciplines have their own set of specialized skills, they overlap in certain areas. Sometimes different teams work on separate initiatives or parts of a landscape, only to discover mid-project that each of them needs to change the same piece of software or service ... Executing such a situation without mishap requires coordination and a good system for foreseeing these dependencies, since it is hard to keep track of them all and some may come back to bite you later. This is where enterprise architecture is needed. Enterprise architects are usually well aware of these relationships, and with their expertise in architecture models they can uncover dependencies that are typically unknown to project or program managers. This is where enterprise architecture and project management correlate: enterprise architecture is about managing the coherence of your business, whereas project management is responsible for planning and managing, usually from a financial and resource perspective.


A Minimum Viable Product Needs a Minimum Viable Architecture

In short, as the team learns more about what the product needs to be, they build only as much of the product, and make only as many architectural decisions, as are absolutely essential to meet the needs they know about now; the product continues to be an MVP, and the architecture continues to be an MVA supporting the MVP. The reason for both of these actions is simple: teams can spend a lot of time and effort implementing features and quality attribute requirements (QARs) in products, only to find that customers don’t share their opinion on their value; beliefs in what is valuable are merely assumptions until they are validated by customers. This is where hypotheses and experiments are useful. In simplified terms, a hypothesis is a proposed explanation for some observation that has not yet been proven (or disproven). In the context of requirements, it is a belief that doing something will lead to something else, such as delivering feature X leading to outcome Y. An experiment is a test designed to prove or reject some hypothesis.
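The "delivering feature X will lead to outcome Y" framing becomes concrete once you compare an outcome metric between users with and without the feature. As a minimal sketch (the conversion numbers here are invented for illustration), a two-proportion z-test is one simple way to decide whether an experiment has rejected the "no difference" hypothesis:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-score for H0: the feature made no difference to the conversion rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothesis: shipping feature X lifts signup conversion (outcome Y).
# Hypothetical data -- control: 120 of 2000 converted; with feature X: 180 of 2000.
z = two_proportion_z(120, 2000, 180, 2000)
print(f"z = {z:.2f}")  # |z| > 1.96 rejects "no difference" at roughly 95% confidence
```

If the experiment fails to reject the hypothesis, the belief about the feature's value remains an assumption, and the team has learned that cheaply, before building more product or architecture on top of it.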


In Search of Coding Quality

The major difference between good- and poor-quality code is maintainability, states Kulbir Raina, Agile and DevOps leader at enterprise advisory firm Capgemini. Therefore, the best direct measurement indicator is operational expense (OPEX). “The lower the OPEX, the better the code,” he says. Other variables that can be used to differentiate code quality are scalability, readability, reusability, extensibility, refactorability, and simplicity. “Code quality can also be effectively measured by identifying technical debt (non-functional requirements) and defects (how well the code aligns with the specifications and functional requirements),” Raina says. “Software documentation and continuous testing provide other ways to continuously measure and improve the quality of code using faster feedback loops,” he adds. ... The impact development speed has on quality is a question that's been hotly debated for many years. “It really depends on the context in which your software is running,” Bruhmuller says. His organization constantly deploys to production, relying on testing and monitoring to ensure quality.


A chip that can classify nearly 2 billion images per second

While current consumer-grade image classification technology on a digital chip can perform billions of computations per second, making it fast enough for most applications, more sophisticated image classification tasks, such as identifying moving objects, 3D object identification, or classification of microscopic cells in the body, are pushing the computational limits of even the most powerful technology. The current speed limit of these technologies is set by the clock-based schedule of computation steps in a computer processor, where computations occur one after another on a linear schedule. To address this limitation, Penn Engineers have created the first scalable chip that classifies and recognizes images almost instantaneously. Firooz Aflatouni, Associate Professor in Electrical and Systems Engineering, along with postdoctoral fellow Farshid Ashtiani and graduate student Alexander J. Geers, has removed the four main time-consuming culprits in the traditional computer chip: the conversion of optical to electrical signals, the need to convert the input data to binary format, a large memory module, and clock-based computations.


Scrum, Remote Teams, & Success: Five Ways to Have All Three

Agile teams have long made use of team agreements (or team working agreements). These set ground rules for the team, created by the team and enforced by the team. When our working environment shifts as much as it has recently, consider establishing some new team agreements specifically designed to address remote work. Examples? On-camera expectations, team core working hours (especially if you’re spread across multiple time zones) and setting aside focus time during which interruptions are kept to a minimum. ... One of the huge disadvantages of a remote team is the lack of personal connections that are made just grabbing a cup of coffee or standing around the water cooler. Remote teams need to be deliberate about counteracting isolation. Consider taking the first few minutes of a meeting to talk about anything non-work related. Set up a time for a team show-and-tell in which each team member can share something from their home or background in their home office that matters to them. Find excuses for the team to share anything that helps teammates get to know each other more—as human beings, not just co-workers. 


Cisco introduces innovations driving new security cloud strategy

Ushering in the next generation of zero trust, Cisco is building solutions that enable true continuous trusted access by constantly verifying user and device identity, device posture, vulnerabilities, and indicators of compromise. These intelligent checks take place in the background, leaving the user to work without security getting in the way. Cisco is introducing less intrusive methods for risk-based authentication, including the patent-pending Wi-Fi fingerprint as an effective location proxy without compromising user privacy. To evaluate risk after a user logs in, Cisco is building session trust analysis using the open Shared Signals and Events standards to share information between vendors. Cisco unveiled the first integration of this technology with a demo of Cisco Secure Access by Duo and Box. “The threat landscape today is evolving faster than ever before,” said Aaron Levie, CEO and Co-founder of Box. “We are excited to strengthen our relationship with Cisco and deliver customers with a powerful new tool that enables them to act on changes in risk dynamically and in near real-time.”


10 key roles for AI success

The domain expert has in-depth knowledge of a particular industry or subject area. This person is an authority in their domain, can judge the quality of available data, and can communicate with the intended business users of an AI project to make sure it has real-world value. These subject matter experts are essential because the technical experts who develop AI systems rarely have expertise in the actual domain the system is being built to benefit, says Max Babych, CEO of software development company SpdLoad. ... When Babych’s company developed a computer-vision system to identify moving objects for autopilots as an alternative to LIDAR, they started the project without a domain expert. Although research proved the system worked, what his company didn’t know was that car brands prefer LIDAR over computer vision because of its proven reliability, and there was no chance they would buy a computer vision–based product. “The key advice I’d like to share is to think about the business model, then attract a domain expert to find out if it is a feasible way to make money in your industry — and only after that try to discuss more technical things,” he says.


Be Proactive! Shift Security Validation Left

When security testing only kicks in at the end of the SDLC, the deployment delays caused by uncovered critical security gaps create rifts between DevOps and SOC teams. Security often gets pushed to the back of the line, and there's not much collaboration when introducing a new tool or method, such as launching occasional simulated attacks against the CI/CD pipeline. Conversely, once a comprehensive continuous security validation approach is baked into the SDLC, daily emulations of attack techniques, invoked through the automation built into XSPM (Extended Security Posture Management) technology, identify misconfigurations early in the process and incentivize close collaboration between DevSecOps and DevOps teams. With built-in inter-team collaboration across both the security and software development lifecycles, and immediate visibility into security implications, the goal alignment of both teams eliminates the strife and friction born of internal politics. Shifting extreme left with comprehensive continuous security validation enables you to map and understand the investments made in various detection and response technologies, and to implement findings that preempt attack techniques across the kill chain and protect real functional requirements.


Unlocking the ‘black box’ of education data

Technology enables education leaders to understand a child’s learning journey in a way that hasn’t previously been possible, whether by logging the time a child spends on a certain task, recording areas in which students consistently do well or poorly, or noting hours spent in extra-curricular programmes. Edtech allows the collection and centralisation of data on a child across their years in school. This data can then be used to build up a holistic picture of the student’s learning to share with everyone who supports that pupil, from teachers, parents and carers to learning support assistants. They are all able to contribute to the discussion on a pupil’s areas for focus and improvement. Artificial intelligence (AI) data analytics can be a valuable tool, allowing teachers to visualise and assess the most effective ways of learning in the classroom and the metacognition processes occurring, and to intervene if needed to support learning. Beyond the classroom, education leaders and policy makers can aggregate data to develop strategies and policies.


How to Retain Talent in Uncertain Circumstances

“There was confusion and uncertainty, which led to a willingness for those professionals in those organizations to listen to the opportunities we had,” Sasson says. “There was no visibility whatsoever, which created an environment where they were more open to hearing what else was out there.” In some cases a company may be planning downsizing after a merger, and they may be allowing that uncertainty to linger because they want some employees to voluntarily find new jobs, Sasson says. However, in other cases organizations may want to retain their valuable talent, particularly in this tight job market. Just because there’s a merger or acquisition doesn’t necessarily mean that everyone will make a stampede to the door. ... Sasson’s team asked the employees at Proofpoint why they weren’t interested in new opportunities. “From what we understand, the CEO at Proofpoint and the Thoma Bravo team -- they seemed to do an excellent job of communicating the value of the acquisition and limiting the jitters that would typically be felt by the rank and file,” Sasson said.



Quote for the day:

"A leader should demonstrate his thoughts and opinions through his actions, not through his words." -- Jack Weatherford

Daily Tech Digest - June 06, 2022

How to Build a Data Science Enablement Team

Data scientists may use processes and tools you’re unfamiliar with, and those processes may not initially jibe with your own. For instance, data scientists may not think twice about emailing you code via Jupyter Notebooks. Or, they might use different versions of Python to create base images, with none in synchronization with each other. Consider offering alternatives to help them improve their workflows (and make your life a bit easier). For example, help them organize what they’re working on by setting up a Jupyter Hub instance or git repository. Making their jobs easier will help build the relationship. ... Most data scientists don’t want to become software developers any more than you probably want to become a data scientist. But bringing them into the DSET isn’t about getting them to learn more about software development — it’s about helping both you and them become more cognizant of the processes you both adhere to. So, while you’re empathizing with their work patterns, get them to understand how adopting some of your processes can help them in their daily workflows.


Feds Issue Alerts for Several Medical Device Security Flaws

The FDA in its alert for healthcare providers says the RUO devices are typically used in a development stage and are not for use in diagnostic procedures. But, it adds, many laboratories may be using the devices with tests for clinical diagnostic use. The vulnerabilities are exploitable remotely and have a low attack complexity, CISA says. The Illumina vulnerabilities involve path traversal, unrestricted upload of file with dangerous type, improper access control, and cleartext transmission of sensitive information. The vulnerabilities were scored as having CVSS v3 base scores of between 7.4 and 10.0. "Successful exploitation of these vulnerabilities may allow an unauthenticated malicious actor to take control of the affected product remotely and take any action at the operating system level," CISA warns. "An attacker could impact settings, configurations, software, or data on the affected product and interact through the affected product with the connected network." "Illumina has confirmed a security vulnerability affecting software in certain Illumina desktop sequencing instruments," the company says in a statement provided to Information Security Media Group. 


Crypto FUD: Quantum Computing Will Dwarf Blockchains’ Security

According to the research carried out by the team at Sussex, only a quantum computer with over 317 million qubits could break the SHA-256 algorithm within an hour or so. At the moment, IBM's largest quantum processor offers around 127 qubits, showing that it is still far behind the processing power that would be required to start causing damage to Bitcoin's algorithms. For Bitcoin’s blockchain to be broken, an attacker would need to perform a 51% attack, taking over the blocks’ mining process. Bitcoin mining is done using special hardware called Application-Specific Integrated Circuits (ASICs), made specifically for mining rigs. Mining relies on a property of the hash function known as “puzzle friendliness”: there is no strategy for finding a valid block hash that is meaningfully better than trying candidate inputs one by one, and any tampering with a block changes its hash in a way the whole system detects, so the miner gets notified. That means the operation of the ASICs cannot be tampered with by any computer without all miners working on the same block being notified concurrently.
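Puzzle friendliness is easy to see in miniature. The sketch below is a toy proof-of-work loop (the block header string and the 16-bit difficulty are illustrative; real Bitcoin difficulty is astronomically higher): the only way to find a qualifying nonce is brute force, and changing anything in the header invalidates the nonce that was found.

```python
import hashlib

def mine(block_header: str, difficulty_bits: int) -> int:
    """Brute-force a nonce so that SHA-256(header + nonce) starts with
    `difficulty_bits` zero bits. Puzzle friendliness means no strategy
    beats trying nonces one by one; tampering with the header changes
    the hash and invalidates any previously found nonce."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Toy difficulty: 16 leading zero bits, i.e. about 65,000 attempts on average.
header = "example-block-header"
nonce = mine(header, 16)
digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
print(nonce, digest[:8])  # digest begins with four hex zeros
```

Since each additional zero bit doubles the expected number of attempts, the network can tune difficulty smoothly, which is why raw hashing speed (ASICs today, hypothetically quantum hardware tomorrow) is the only lever an attacker has.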


8 ways level of detail could improve digital twins

The architectural, engineering, and construction industry uses a related concept called Level of Development in Building Information Modeling (BIM) to characterize changes in technical design depth across a project’s development process. It describes the level to which planning teams have fleshed out the specifications, geometry and attached information. In the early stages, planning groups may just want to quickly estimate the overall cost and complexity of a project before proceeding. Later, domain experts such as electricians, plumbers and structural engineers can plan out exact gauges of wire and pipe in richer depth. These later levels of development can help plan orders and schedule the construction sequence so that teams do not interfere with each other. ... In good experience design, it is often helpful to guide a user’s attention to a particular detail. For example, it might be more beneficial to highlight the exact screws a repair technician needs to remove rather than render a scene in complete detail using an augmented reality overlay. Researchers believe that using LOD for glanceable interfaces could clarify complicated repairs and procedures. In musical concerts, visual augmentation with LOD could enhance the audience experience.


Considering digital trust: why zero trust needs a rethink

Knowing that digital trust is now critical for all businesses and organisations, why has zero trust gained so much attention? Simply put, we can’t assume that everything should be trusted by default; instead we take a zero trust approach, then establish and maintain trust explicitly. From a security leader and CISO perspective, that means we need to establish and maintain trust with all entities that make up and interact with the business. As such, digital trust here is the trust in machines, software, devices, and humans interacting with the digital services that now power our world. It should not be confused with zero trust, which is often misinterpreted as meaning the ‘zero’ implies no trust at all exists. Trust is dynamic, and it needs to be constantly upheld. The way enterprises approach establishing digital trust is important to ensure the functioning of the business, but specifically the security of both human and machine identities. While many organisations focused on zero trust initiatives over the past few years, many recognised that trust in humans and machines is the foundational layer. In the modern enterprise, security leaders must design solid identity-first security frameworks deeply rooted in cryptography for digital trust to be established.


Connected Healthcare Takes Huge Leap Forward

Business and IT leaders who ignore connected healthcare do so at their own peril. A study from Doctor.com found that 83% of patients using telemedicine plan to continue with it after the pandemic. In addition, 68% prefer to use their mobile phone to make appointments and handle other tasks, and 91% say that connected tech is valuable for managing prescriptions and compliance. At some point -- and there’s some indication that it’s already happening -- consumer companies like Apple, Withings, Oura and Fitbit will steal away opportunities for new products and services. Already, drug store chains and smaller and more disruptive companies are establishing footholds, and new and innovative healthcare products are appearing. “There are growing opportunities for data and app-related services, apps, subscriptions and more but traditional healthcare providers often don’t see this,” Schooley points out. Establishing an IT foundation to support connected health is vital. Hall says this includes a cloud-first architecture, integrating IoT and edge technologies, focusing on data standards, building more sophisticated and interactive apps, exploring partnerships, and cultivating skillsets needed to support both innovation and operations.


The costs and damages of DNS attacks

A DNS attack does not just result in an inconvenient business disruption but can be a costly expense for organizations. In the past 12 months, APAC has become the region with the highest average cost of a successful attack at $1,036,040, an increase of 14% when compared to 2021, while EMEA and North America’s average cost of a successful attack has decreased by 4% and 7% respectively. Malaysia (21%), Germany (18%) and both India and the UK (14% each) experienced the highest increase in the cost of an attack, while Spain saw its cost of damages plummet by almost half (48%) when compared to 2021. France and the US were the only other countries that saw a decline in the average cost, with 21% and 5% respectively. Cybercriminals are continuing to use all available tools to gain access to networks, disrupt the business and steal data by specifically targeting the hybrid workforce, with DNS-based attacks becoming increasingly pervasive across all industries. In the last year, 70% of organizations suffered in-house and cloud application downtime, with the average time to mitigate these threats increasing to 6 hours and 7 minutes, meaning that employees, partners, and customers were unable to access services.


Government Agencies Seize Domains Used to Sell Credentials

"The actions executed by our international partners included the arrest of a main subject, searches of several locations, and seizures of the web server's infrastructure," according to the DOJ. In December 2020, Britain's National Crime Agency reported arrests of 21 individuals on suspicion of purchasing personally identifiable information from the WeLeakInfo website for a variety of purposes, including the buying and selling of malicious cyber tools such as remote access Trojans, aka RATs, as well as to buy "cryptors," which can be used to obfuscate code in malware, according to the NCA. It has said that all are men, ranging in age from 18 to 38 and the arrests took place over a five-week period starting in November 2020. Beyond the 21 people arrested by police, another 69 individuals in England, Wales and Northern Ireland have received warnings from the NCA or other domestic law enforcement agencies, saying they may have engaged in criminal activity tied to the investigation. Sixty of those individuals also received cease-and-desist orders from police.


The Value of Data Mobility for Modern Enterprises

Despite all the excitement about data analytics, it’s not a silver bullet. Turning data into real business value isn’t simply a matter of deploying all the right tools. To be sure, it requires some smart investment in good technology, but ultimately, it’s got to be about identifying high-value business cases and making sure that your business users have what they need to deliver positive outcomes. Business success is virtually always about compromise. For years, CTOs have grappled with the pros and cons of unified systems versus best-of-breed environments. They have weighed the advantages of diverse, purpose-built systems against the inherent value of a large-scale monolithic platform that offers a holistic approach to the business. In the end, best-of-breed won that battle. As a result, the problem of data silos became more pronounced. The hunger for real-time analytics has rendered the pain caused by data silos far more palpable. But there is good news; if we make the data from all those different systems available in a single place, we can have the best of both worlds.


Digital transformation: How to gain organizational buy-in

Data analytics does not always require data scientists. CIOs and IT leaders often reach a turning point when they discover that most employees can be trained to become resident data analytics subject experts. When employees combine new knowledge of data analysis with their existing knowledge of the processes or machines, they can quickly be at the forefront of a digital journey. This is welcome news to most IT leaders, simply because the demand for skillsets in data science and cybersecurity has skyrocketed. Upskilling existing team members can be critical in attaining sustained adoption and continuous improvements of digital solutions. This includes long-term improvements in employee engagement and retention, increased cross-functional collaboration, and adoption of modern technology trends. Along with their technical skills, employees need to be skilled at diagnostics and problem-solving using the data now readily available to them. Employees who may have previously been data-gatherers can shift to become problem-solvers based on new data-driven insights. Make sure your employees are ready to learn and grow to take advantage of these opportunities.



Quote for the day:

"The essence of leadership is the willingness to make the tough decisions. Prepared to be lonely." -- Colin Powell

Daily Tech Digest - June 05, 2022

How the Web3 stack will automate the enterprise

Web3 is only partially in existence within enterprises but is already making an incredible impact and altering strategies. Cross River Bank, which just raised $620 million at a $3 billion valuation, powers embedded payments, cards, lending, and crypto solutions for over 80 leading technology partners. Cross River CEO Giles Gade’s plan is to start offering more crypto-related products and services, gearing towards a crypto-first strategy. Investors are excited by the opportunity. “As Web3 continues to gain mindshare of consumers and businesses alike, we believe Cross River sits in a unique position to serve as the infrastructure and interconnective tissue between the traditional and regulated centralized financial system, as it transitions slowly to a decentralized one,” said Lior Prosor, General Partner and Co-founder of Hanaco Ventures in the Cross River press release. In many ways, this time is no different than when financial institutions and VCs saw the disruptive potential by investing in FinTech innovation – analog to digital – years prior. If FinTech is the blending of technology and finance, Web3 is the merging of crypto with the web.

Demystifying the Metrics Store and Semantic Layer

First, many critical data assets end up isolated on local servers, data centers and cloud services. Unifying them poses a significant challenge. Often, there are also no standardized data and business definitions, and this adds to the difficulty for businesses to tap into the full value of their data. As companies embark on new data management projects, they need to address these concerns; however, many have chosen to avoid this issue for one reason or another. This results in new data silos across the business. Second, as every data warehouse practitioner is aware, it’s difficult for most business users to interpret the data in the warehouse. Because technical metadata like table names, column names and data types are typically worthless to business users, data warehouses aren’t enough when it comes to allowing users to conduct analysis on their own. From a business user’s perspective, what can be done to solve this problem? Two popular solutions are metrics stores and semantic layers, but which is the best approach? And what’s the difference between them?
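The core idea behind a metrics store can be sketched in a few lines: a business metric such as "net revenue" is defined once, centrally, and every consumer (dashboard, notebook, report) computes it through that single definition rather than re-deriving it from raw tables. The registry, metric names and order schema below are all illustrative, not any vendor's actual API:

```python
# Toy metrics store: business definitions live in one place so every
# tool answers "what is net revenue?" the same way.
orders = [
    {"amount": 120.0, "status": "complete"},
    {"amount": 80.0,  "status": "refunded"},
    {"amount": 200.0, "status": "complete"},
]

METRICS = {
    # One agreed definition: refunded orders never count as revenue.
    "net_revenue": lambda rows: sum(r["amount"] for r in rows if r["status"] == "complete"),
    "order_count": lambda rows: len(rows),
}

def query(metric_name: str, rows):
    """Every consumer goes through the central definition, never its own SQL."""
    return METRICS[metric_name](rows)

print(query("net_revenue", orders))  # 320.0
```

A semantic layer goes further by also mapping technical metadata (table and column names, joins) to business-friendly terms, so that the definitions above can be expressed against warehouse tables rather than in application code.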


Why HR plays an important role in preventing cyber attacks

HR staff members often work with legal counsel on security policies, including the creation, maintenance and enforcement of acceptable usage policies. Since HR staff communicates frequently with employees, they are well positioned to share information about security and privacy expectations and often already work to keep security topics top-of-mind for employees. ... As with security policy work, HR professionals are often a valuable part of compliance-related initiatives because certain aspects of state, federal and international privacy and security compliance regulations require HR expertise. This is particularly true for larger organizations that have office locations or employees in multiple countries. HR may work on the creation of processes including user onboarding and offboarding, security awareness and training, and the steps for incident response once a crisis occurs. ... Some HR professionals already serve on their IT and security governance committee, as it's only natural that HR should help get the word out on security and assist with policy creation and administration when needed.


7 Reasons Why Serverless Encourages Useful Engineering Practices

They are easier to change. After reading the book “The Pragmatic Programmer”, I realized that making your software easy to change is THE de-facto principle to live by as an IT professional. For instance, when you leverage functional programming with pure (ideally idempotent) functions, you always know what to expect as input and output. Thus, modifying your code is simple. If written properly, serverless functions encourage code that is easy to change and stateless. They are easier to deploy — if the changes you made to an individual service don’t affect other components, redeploying a single function or container should not disrupt other parts of your architecture. This is one reason why many decide to split their Git repositories from a “monorepo” to one repository per service. With serverless, you are literally forced to make your components small. For instance, you cannot run any long-running processes with AWS Lambda (at least for now). At the time of writing, the maximum timeout configuration doesn’t allow for any process that takes longer than 15 minutes. 
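The pure, idempotent style the excerpt describes looks like this in practice. The sketch below uses an AWS Lambda-style handler signature, but the event fields are invented for illustration; the point is that the function reads only its input and touches no external state, so the same event always yields the same result and retries are harmless:

```python
# A pure, idempotent serverless handler: output depends only on input,
# no shared or external state is read or written.
def handler(event: dict, context=None) -> dict:
    items = event.get("items", [])
    total = sum(i["price"] * i["qty"] for i in items)
    return {"statusCode": 200,
            "body": {"order_id": event["order_id"], "total": total}}

event = {"order_id": "A-17",
         "items": [{"price": 9.5, "qty": 2}, {"price": 3.0, "qty": 1}]}
assert handler(event) == handler(event)  # idempotent: same input, same output
print(handler(event)["body"]["total"])  # 22.0
```

Because nothing outside the function changes, the platform is free to retry, redeploy or scale it without coordination, which is exactly what makes such components easy to change and easy to deploy.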



WTF is a Service Mesh?

The internal workings of a Service Mesh are conceptually fairly simple: every microservice is accompanied by its own local HTTP proxy. These proxies perform all the advanced functions that define a Service Mesh (think about the kind of features offered by a reverse proxy or API Gateway). However, with a Service Mesh this is distributed between the microservices—in their individual proxies—rather than being centralised. In a Kubernetes environment these proxies can be automatically injected into Pods, and can transparently intercept all of the microservices’ traffic; no changes to the applications or their Deployment YAMLs (in the Kubernetes sense of the term) are needed. These proxies, running alongside the application code, are called sidecars. These proxies form the data plane of the Service Mesh, the layer through which the data—the HTTP requests and responses—flow. This is only half of the puzzle though: for these proxies to do what we want they all need complex and individual configuration. Hence a Service Mesh has a second part, a control plane.
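Conceptually, the sidecar's job can be shown with an in-process toy: it wraps every call to the service and transparently adds cross-cutting behavior (here, retries and request metrics) without any change to the application code. This is only a sketch of the idea; a real mesh does this with a separate proxy process (such as Envoy) intercepting the Pod's HTTP traffic, and the service names below are invented:

```python
# Toy "sidecar": wraps a service handler, adding retries and metrics
# transparently, the way a mesh proxy does for real HTTP traffic.
class Sidecar:
    def __init__(self, service, max_retries=2):
        self.service = service          # the wrapped microservice handler
        self.max_retries = max_retries
        self.metrics = {"requests": 0, "retries": 0}

    def handle(self, request: dict) -> dict:
        self.metrics["requests"] += 1
        for attempt in range(self.max_retries + 1):
            try:
                return self.service(request)
            except ConnectionError:
                if attempt == self.max_retries:
                    raise
                self.metrics["retries"] += 1

flaky = {"calls": 0}
def inventory_service(request):
    flaky["calls"] += 1
    if flaky["calls"] == 1:             # first call fails, as flaky upstreams do
        raise ConnectionError("upstream reset")
    return {"status": 200, "sku": request["sku"], "in_stock": True}

proxy = Sidecar(inventory_service)
resp = proxy.handle({"sku": "X-42"})
print(resp["status"], proxy.metrics)  # 200 {'requests': 1, 'retries': 1}
```

The application never sees the retry; from its point of view the call simply succeeded. Multiply this by one proxy per microservice and you have the data plane; the control plane's job is to push each proxy's configuration (retry budgets, routes, certificates) from one place.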


Best Practices for Deploying Language Models

We’re recommending several key principles to help providers of large language models (LLMs) mitigate the risks of this technology in order to achieve its full promise to augment human capabilities. While these principles were developed specifically based on our experience with providing LLMs through an API, we hope they will be useful regardless of release strategy (such as open-sourcing or use within a company). We expect these recommendations to change significantly over time because the commercial uses of LLMs and accompanying safety considerations are new and evolving. We are actively learning about and addressing LLM limitations and avenues for misuse, and will update these principles and practices in collaboration with the broader community over time. We’re sharing these principles in hopes that other LLM providers may learn from and adopt them, and to advance public discussion on LLM development and deployment.


A cybersecurity expert explains why it would be so hard to obscure phone data in a post-Roe world

There’s not a whole lot users can do to protect themselves. Communications metadata and device telemetry – information from the phone sensors – are used to send, deliver and display content. Not including them is usually not possible. And unlike the search terms or map locations you consciously provide, metadata and telemetry are sent without you even seeing it. Providing consent isn’t plausible. There’s too much of this data, and it’s too complicated to decide each case. Each application you use – video, chat, web surfing, email – uses metadata and telemetry differently. Providing truly informed consent that you know what information you’re providing and for what use is effectively impossible. If you use your mobile phone for anything other than a paperweight, your visit to the cannabis dispensary and your personality – how extroverted you are or whether you’re likely to be on the outs with family since the 2016 election – can be learned from metadata and telemetry and shared.


Three Architectures That Could Power The Robotic Age With Autonomous Machine Computing

Similar to other information technology stacks, the autonomous machine computing technology stack consists of hardware, systems software and application software. Sitting in the middle of this technology stack is computer architecture, which defines the core abstraction between hardware and software. The existence of this abstraction layer allows software developers to focus on optimizing the software to fully utilize the underlying hardware to develop better applications as well as to achieve higher performance and higher energy efficiency. This abstraction layer also allows hardware developers to focus on developing faster, more affordable, more energy-efficient hardware that can unlock the imagination of software developers. ... Hence, computer architecture is essential to information technology. For instance, in the personal computing era, x86 has become the dominant computer architecture due to its superior performance. In the mobile computing era, ARM has become the dominant computer architecture due to its superior energy efficiency. 


Datadog finds serverless computing is going mainstream

Serverless represents the ideal state of cloud computing, where you use exactly the resources you need and no more. That’s because the cloud provider delivers those resources only when a specific event happens and shuts them down when the event is over. It’s not a lack of servers so much as not having to deploy the servers, because the provider handles that for you in an automated fashion. When people began talking about cloud computing around 2008, one of the advantages was elastic computing, or only using what you need, scaling up or down as necessary. In reality, developers don’t know what they’ll need, so they’ll often overprovision to make sure the application stays up and running. The company created the report based on data running through its monitoring service. While it represents only the activity from its customers, Rabinovitch sees it as quality data given the broad range of customers using its services. “We do think we’re well represented across the industry, and we believe that we’re representative of real production workloads,” he said.
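The event-driven model described here can be sketched as a minimal function-as-a-service handler (the event shape and handler signature are generic assumptions, not any specific provider’s API):

```python
import json

def handler(event, context=None):
    """Entry point the platform invokes once per event; nothing runs between calls.

    The provider provisions a runtime only while this function executes,
    then reclaims it -- the "use exactly what you need" model described above.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally we can simulate the provider dispatching an event:
if __name__ == "__main__":
    print(handler({"name": "reader"}))
```

The provider, not the developer, decides when a runtime exists: it spins one up to run `handler` for each event and reclaims it afterwards, which is what makes overprovisioning unnecessary.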


How Platform Engineering Helps Manage Innovation Responsibly

Platform engineering, then, is a support function. If it enables, it does so by reducing complexity and making it easier for developers and other technical teams to achieve their objectives. Moreover, one of the advantages of having a platform engineering team is that it can balance competing needs and aims — like, for example, developer experience and security — in a way that ensures engineering capabilities and commercial imperatives are properly aligned. Calling it a “support function” might not sound particularly sexy, but it nevertheless suggests that organizations are maturing in their approach to software development. It’s no longer the locus of moving fast and breaking things, but instead recognized as something that requires care and stewardship. But this implies responsibility — and that, to invert the old adage, carries considerable power. This means that platform engineering can become a political beast within organizations. If it can shape the way developers work, it can inevitably play a part in the direction of a whole technology strategy.



Quote for the day:

"Leadership is developed daily, not in a day." -- John C. Maxwell

Daily Tech Digest - June 02, 2022

A decentralized verification system could be the key to boosting digital security

Instead of placing trust in a single central entity, decentralization places trust in the network as a whole, and this network can exist outside of the IAM system using it. The mathematical structure of the algorithms underpinning the decentralized authority ensures that no single node can act alone. Moreover, each node on the network can be operated by an independently operating organization, such as a bank, telecommunications company, or government department. So, stealing a single secret would require hacking several independent nodes. Even in the event of an IAM system breach, the attacker would only gain access to some user data – not the entire system. And to award themselves authority over the entire organization, they would need to breach a combination of 14 independently operating nodes. This isn’t impossible, but it’s a lot harder. But beautiful mathematics and verified algorithms still aren’t enough to make a usable system. There’s more work to be done before we can take decentralized authority from a concept to a functioning network that will keep our accounts safe.
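The “no single node can act alone” property is typically achieved with a threshold scheme. The article does not name the algorithm, so as an illustrative assumption, here is a minimal Shamir secret-sharing sketch in which any k of n nodes can reconstruct a secret, while fewer than k shares reveal nothing:

```python
import random

PRIME = 2**127 - 1  # Mersenne prime; all arithmetic is in GF(PRIME)

def split(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it (Shamir)."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation of the degree k-1 polynomial
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate the polynomial at x=0 to recover the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

With k = 3 and n = 5, any three nodes together can authorize an action, but a breach of one or two nodes yields no usable secret, which is the “stealing a single secret isn’t enough” guarantee in miniature.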


Emerging digital twins standards promote interoperability

Digital twins today are mostly application-driven. “But what we really need is the interoperable digital twin so we can realize the interoperability between these different digital twins,” said Christian Mosch, general manager at IDTA. The IDTA Asset Administration Shell standard provides a framework for sharing data across the different lifecycle phases such as planning, development, construction, commissioning, operation and recycling at the end of life. It provides a way of thinking about assets such as a robot arm and the administration of the different data and documents that describe it across various lifecycle phases. The shell provides a container for consistently storing different types of information and documentation. For example, the robot arm might include engineering data such as 3D geometry drawings, design properties and simulation results. It may also include documentation such as declarations of conformity and proof certifications. The Asset Administration Shell also brings data from operations technology used to manage equipment on the shop floor into the IT realm to represent data across the lifecycle. 
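A toy sketch of the container idea (field and class names here are hypothetical; the real Asset Administration Shell metamodel defined by IDTA is far richer):

```python
from dataclasses import dataclass, field

@dataclass
class Submodel:
    """One facet of an asset, e.g. engineering data or documentation."""
    kind: str                       # e.g. "EngineeringData", "Documentation"
    properties: dict = field(default_factory=dict)

@dataclass
class AssetAdministrationShell:
    """Container tying an asset's data together across lifecycle phases."""
    asset_id: str
    submodels: list = field(default_factory=list)

    def find(self, kind):
        return [s for s in self.submodels if s.kind == kind]

# The robot-arm example from the article, as data:
robot_arm = AssetAdministrationShell(
    asset_id="robot-arm-001",
    submodels=[
        Submodel("EngineeringData", {"geometry": "arm.step", "payload_kg": 10}),
        Submodel("Documentation", {"conformity": "CE-declaration.pdf"}),
    ],
)
```

The point of the standard is that any consumer – planning, commissioning, operations, or recycling tooling – can query the same container for the facet it cares about.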


4 Database Access Control Methods to Automate

The beauty of using security automation as a data broker is that it has the ability to validate data-retrieval requests. This includes verifying that the requestor actually has permission to see the data being requested. If the proper permissions aren’t in place, the user can submit a request to be added to a specific role through the normal request channels, which is typically the way to go. With automated data access control, this request could be generated and sent within the solution to streamline the process. This also allows additional context-specific information to be included in the data-access request automatically. For example, if someone requests data that they do not have access to within their role, the solution can be configured to look up the database owner, populate an access request and send it to the owner of the data, who can then approve one-time access or grant access for a certain period of time. A common scenario where this is useful is when an employee goes on vacation and someone new is helping with their clients’ needs while they are out.
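The flow described above can be sketched as follows (the role tables, owner lookup, and message routing are simplified stand-ins for a real IAM integration):

```python
ROLES = {"alice": {"sales"}}          # requester -> roles (demo data)
TABLE_ACL = {"clients": {"sales"}}    # table -> roles permitted to read it
TABLE_OWNER = {"clients": "bob"}      # table -> data owner who approves exceptions

def handle_request(user, table, outbox):
    """Grant if the user's roles intersect the table's ACL; otherwise
    auto-generate an access request and route it to the data owner."""
    if ROLES.get(user, set()) & TABLE_ACL.get(table, set()):
        return "granted"
    outbox.append({
        "to": TABLE_OWNER[table],
        "request": f"{user} asks for one-time access to {table}",
    })
    return "pending-owner-approval"
```

The vacation scenario falls out naturally: a covering employee without the role triggers an owner-routed request rather than a hard failure, and the owner can grant one-time or time-boxed access.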


AI still needs humans to stay intelligent—here’s why

Remember, AI models are usually programmes or algorithms built to use data to recognise patterns, and either reach a conclusion or make a prediction. Once designed, paid for, and implemented, it’s easy to assume that these models will stay smart forever. Instead, they nearly always require regular human intervention. Why? Let’s look at a few examples: It’s likely that the technology your organisation uses in day-to-day operations is regularly changed and upgraded; Your company might have uncovered new intelligence about your customers, such as levels of interaction with a recently launched product; Your business’ strategies may change – for example, you might switch focus from reducing production costs to investing in a quality customer experience.  ... Where possible, avoid ‘technical debt’ by focusing on gradual AI improvements, rather than waiting for an issue to flare up and then facing a gruelling system overhaul. And finally, strive to create an AI-aware culture in your workplace. Educate your employees on how your AI systems work, why they’re reliable, why they’re to be trusted rather than feared – and that they’re not a replacement for their jobs.
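A minimal illustration of why models need ongoing attention (an illustrative sketch with assumed thresholds, not from the article): flag a model for human review when incoming data drifts from the training-time baseline.

```python
def drift_score(baseline, recent):
    """Shift of the recent mean from the training-time mean, measured in
    units of the baseline standard deviation (a crude z-style check)."""
    mean_b = sum(baseline) / len(baseline)
    mean_r = sum(recent) / len(recent)
    var = sum((x - mean_b) ** 2 for x in baseline) / len(baseline)
    std = var ** 0.5 or 1.0  # avoid dividing by zero for constant baselines
    return abs(mean_r - mean_b) / std

def needs_review(baseline, recent, threshold=2.0):
    """Signal that a human should inspect or retrain the model."""
    return drift_score(baseline, recent) > threshold
```

Real monitoring compares whole distributions, not just means, but even this toy check captures the article’s point: the model doesn’t “stay smart” on its own; a person has to notice when its inputs no longer look like its training data.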


Massive shadow code risk for world’s largest businesses

“While retail and credit card breaches grab the most headlines, this is a pervasive and relatively unchecked risk to both security and privacy across all verticals,” said Dan Dinnar, CEO of Source Defense. “It’s also a fast-growing and extremely volatile issue with regard to sensitive data. Organizations and their digital supply chain partners are constantly updating sites and code, and the data of greatest value to malicious actors is collected on the pages where the business has the greatest need for analytics, tag management, and other tracking and management capabilities.” Extensive libraries of third-party scripts are available free, or at low cost, from a range of communities, organizations, and even individuals, and are extremely popular as they allow development teams to quickly add advanced functionality to applications without the burden of creating and maintaining them. These packages also often contain code from additional parties further removed from – and farther out of the purview of – the deploying organization.
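One common control for this class of risk – pinning each third-party script to a content hash, as in W3C Subresource Integrity – is not mentioned in the article, but it illustrates the countermeasure; a sketch:

```python
import base64
import hashlib

def sri_hash(script_bytes):
    """Compute a Subresource Integrity digest for a script body."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode()

def verify(script_bytes, pinned):
    """True only while the fetched script matches the pinned digest, so a
    silently updated or tampered third-party script fails closed."""
    return sri_hash(script_bytes) == pinned

# Pin the script at review time; verify on every subsequent fetch.
pin = sri_hash(b"console.log('analytics v1');")
```

The trade-off is exactly the one the article implies: pinning blocks the silent updates attackers exploit, but it also blocks legitimate updates until someone re-reviews and re-pins, which deep supply chains make laborious.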


High-tech legislation through self-regulation

In industries where no direct legislation exists, judges have to rely on a multitude of secondary factors, putting additional strain on them. In some cases, they might be left only with the general principles of law. In web scraping, data protection laws, e.g. GDPR, became the go-to area for related cases. Many of them have been decided on the basis of these regulations and rightfully so. But scraping is much more than just data protection. Case law, mostly from the US, has in turn been used as one of the fundamental parts that have directed the way for our current understanding of the legal intricacies of web scraping. Although, regretfully, that direction isn’t set in stone. Yet, using such indirect laws and practices to regulate an industry, even with the best intentions, can lead to unsatisfying outcomes. A majority of the publicly accessible data is being held by specific companies, particularly social media websites. Social media companies and other data giants will do everything in their power to protect the data they hold. Unfortunately, they might sometimes go too far when protecting personal data.


Why AI Ethics Is Even More Important Now

AI ethics stems from a company's values. Those values should be reflected in the company's culture as well as how the company utilizes AI. One cannot assume that technologists can just build or implement something on their own that will necessarily result in the desired outcome(s). "You cannot create a technological solution that will prevent unethical use and only enable the ethical use," said Forrester's Carlsson. "What you need actually is leadership. You need people to be making those calls about what the organization will and won't be doing and be willing to stand behind those, and adjust those as information comes in." Translating values into AI implementations that align with those values requires an understanding of AI, the use cases, who or what could potentially benefit and who or what could be potentially harmed. "Most of the unethical use that I encounter is done unintentionally," said Forrester's Carlsson. "Of the use cases where it wasn't done unintentionally, usually they knew they were doing something ethically dubious and they chose to overlook it." Part of the problem is that risk management professionals and technology professionals are not yet working together enough.


Digital transformation: 5 ways to create a realistic strategy

Understand that digital transformation doesn’t just happen in the IT department; it happens in the C-suite, in cubicles, and in home offices. That means all stakeholders need to be aligned and in agreement with your company’s digital transformation goal. The directive must come from management, but the work will happen throughout the company, often precipitating a major cultural shift toward new technologies and processes. In such cases, training and change management might be necessary to make users feel more comfortable with the new tools and processes. Leaders need to ensure that their teams are on board with the direction the company is moving in, and they should be willing to listen to feedback as the organization continues along its journey. What that plan looks like is up to you. Digital transformation is different for everyone, and every company has its own objectives. Meeting those objectives can be daunting. But by setting a goal, performing an assessment, breaking your plan into manageable pieces, budgeting realistically, and getting everyone to buy in, you will succeed.


Three ways to prevent hybrid work from breaking your company culture

Companies need to take a hard look at the current environment and gauge how effectively it supports different types of work. Many aspects of office design are based on convention rather than deliberate thought. One analysis found that building thermostats typically have been calibrated for the comfort of men who are 40 years old and weigh approximately 154 pounds, which is cooler than is comfortable for most women. That norm was established decades ago and never updated. Just about every physical feature of the office can be made more conducive to hybrid work. Technology such as an online whiteboard for meetings, smart cameras that automatically pan to people as they talk, and virtual receptionists help to bridge the gap between virtual and in-office workforces. ... Last, leaders must set employees up for success. These support mechanisms can be quite diverse. The insurance company mentioned above, for instance, created training programs to give its employees the right skills to succeed in a hybrid workplace. These included tactical help on new technology, along with training for managers on effective virtual coaching conversations.


Why the Dual Operating Model Impedes Enterprise Agility

In the traditional organization, waiting for things (or queueing) is the norm: waiting for people to respond to emails, waiting days or weeks for a meeting because that’s the first open time on everyone’s calendar, or waiting for someone else to finish their part of a project so you can start yours. But waiting is death for agile teams; it wastes valuable time and diverts their focus. And when I say "death", I am not exaggerating for effect. Waiting makes agile teams ineffective, and over time it will kill the agile team’s ability to get things done. If an agile team has to wait every time it needs something from the rest of the organization, pretty soon it will act just like any other team. This is one reason why agile teams only seem to work on new initiatives that are completely disconnected from the existing organization: so long as they don’t have to interact with the rest of the organization, so long as they are completely self-contained, they don’t waste time waiting and they can work in an agile way. But once they need expertise or authority they don’t have, it all starts to fall apart.



Quote for the day:

"Being defeated is often a temporary condition. Giving up is what makes it permanent." -- Marilyn Vos Savant

Daily Tech Digest - May 26, 2022

4 Reasons to Shift Left and Add Security Earlier in the SDLC

Collaboration is critical for the security and development teams, especially when timelines have to change. The security operations center (SOC) team may need to train on cloud technologies and capabilities, while the cloud team may need help understanding how the organization performs risk management. Understanding the roles and responsibilities of these teams and the security functions each fulfill is critical to managing security risks. In some scenarios, security teams can act as enablers for cloud engineering, teaching teams how to be self-sufficient in performing threat-modeling exercises. In other situations, security teams can act as escalation paths during security incidents. Last, security teams can also own and operate underlying platforms or libraries that provide contextual value to more stream-oriented cloud engineering teams, such as IaC scanning capabilities, shared libraries for authentication and monitoring, and support of workloads constructs, such as secure service meshes.


We have bigger targets than beating Oracle, say open source DB pioneers

The pitching of open source against Oracle's own proprietary database has shifted as the market has moved on and developers lead a database strategy building a wide range of applications in the cloud, rather than a narrower set of business applications. Zaitsev pointed out that if you look at the rankings on DB-Engines, which combines mentions, job ads and social media data, Oracle is always the top RDBMS. But a Stack Overflow survey would not even put Oracle in the top five. So as far as developers are concerned, the debate about whether Oracle is the enemy is over. "The reality is, the majority of developers — especially good developers — prefer open source," he said. ... "There's a lot of companies now who are basically saying, 'Forget the Oracle API, I want to standardise on the PostgreSQL API.' They don't even want a non-PostgreSQL API because they see it is a growing market and opportunity with additional cost savings, flexibility, and continual innovation," he said, also speaking at Percona Live. "Years ago, if you had to rewrite your application from Oracle to PostgreSQL, that was a negative, that was a cost to you. ..."


Ultrafast Computers Are Coming: Laser Bursts Drive Fastest-Ever Logic Gates

The researchers’ advances have opened the door to information processing at the petahertz limit, where one quadrillion computational operations can be processed per second. That is almost a million times faster than today’s computers operating with gigahertz clock rates, where 1 petahertz is 1 million gigahertz. “This is a great example of how fundamental science can lead to new technologies,” says Ignacio Franco, an associate professor of chemistry and physics at Rochester who, in collaboration with doctoral student Antonio José Garzón-Ramírez ’21 (PhD), performed the theoretical studies that led to this discovery. ... The ultrashort laser pulse sets in motion, or “excites,” the electrons in graphene and, importantly, sends them in a particular direction—thus generating a net electrical current. Laser pulses can produce electricity far faster than any traditional method—and do so in the absence of applied voltage. Further, the direction and magnitude of the current can be controlled simply by varying the shape of the laser pulse (that is, by changing its phase).


A computer cooling breakthrough uses a common material to boost power 740 percent

Researchers at the University of Illinois at Urbana-Champaign (UIUC) and the University of California, Berkeley (UC Berkeley) have recently devised an invention that could cool down electronics more efficiently than alternative solutions and enable a 740 percent increase in power per unit, according to a press release by the institutions published Thursday. Tarek Gebrael, the lead author of the new research and a UIUC Ph.D. student in mechanical engineering, explained that current cooling solutions have three specific problems. "First, they can be expensive and difficult to scale up," he said. He brought up the example of heat spreaders made of diamonds, which are obviously very expensive. Second, he described how conventional heat-spreading approaches generally place the heat spreader and a heat sink (a device for dissipating heat efficiently) on top of the electronic device. Unfortunately, "in many cases, most of the heat is generated underneath the electronic device," meaning that the cooling mechanism isn't where it is needed most.


Tech firms are making computer chips with human cells – is it ethical?

Cortical Labs believes its hybrid chips could be the key to the kinds of complex reasoning that today’s computers and AI cannot produce. Another start-up making computers from lab-grown neurons, Koniku, believes their technology will revolutionise several industries including agriculture, healthcare, military technology and airport security. Other types of organic computers are also in the early stages of development. While silicon computers transformed society, they are still outmatched by the brains of most animals. For example, a cat’s brain contains 1,000 times more data storage than an average iPad and can use this information a million times faster. The human brain, with its trillion neural connections, is capable of performing 15 quintillion operations per second. This can only be matched today by massive supercomputers using vast amounts of energy. The human brain only uses about 20 watts of energy, or about the same as it takes to power a lightbulb. It would take 34 coal-powered plants, each generating 500 megawatts, to store the same amount of data contained in one human brain in modern data storage centres.


SolarWinds: Here's how we're building everything around this new cybersecurity strategy

Now, SolarWinds uses a system of parallel builds, where the location keeps changing, even after the project has been completed and shipped. Much of this access is only provided on a need-to-know basis. That means if an attacker was ever able to breach the network, there's a smaller window to poison the code with a malicious build. "What we're really trying to achieve from a security standpoint is to reduce the threat window, providing the least amount of time possible for a threat actor to inject malware into our code," said Ramakrishna. But changing the process of how code is developed, updated and shipped isn't going to help prevent cyberattacks alone, which is why SolarWinds is now investing heavily in many other areas of cybersecurity. These areas include the likes of user training and actively looking for potential vulnerabilities in networks. Part of this involved building up a red team, cybersecurity personnel who have the job of testing network defences and finding potential flaws or holes that could be abused by attackers – crucially before the attackers find them.
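A simplified sketch of the parallel-build idea (SolarWinds’ actual pipeline is not public at this level of detail, so this is illustrative): release only when artifacts produced by independent build environments agree, so that a single poisoned environment is detectable.

```python
import hashlib

def verify_parallel_builds(artifacts):
    """Ship only when every independently built artifact has the same digest.

    Each element of `artifacts` stands in for the output of one isolated
    build environment; if a threat actor injects code into one environment,
    its digest diverges from the others and the release is blocked.
    """
    digests = {hashlib.sha256(a).hexdigest() for a in artifacts}
    return len(digests) == 1

clean = [b"compiled-bytes"] * 3
tampered = [b"compiled-bytes", b"compiled-bytes", b"compiled-bytes+implant"]
```

This only works for deterministic (reproducible) builds; the scheme narrows the threat window exactly as described, because an attacker would have to compromise every environment simultaneously, and those environments keep moving.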


How to stop your staff ignoring cybersecurity advice

While regular reminders are great, if you deliver the same message repeatedly, there is a danger that staff will zone out and ultimately become disengaged with the process. We’ve seen clear evidence of this over the past year, with awareness of key phrases falling, sometimes significantly. In this year’s State of the Phish Report, just over half (53%) of users could correctly define phishing, down from 63% the previous year. Recognition also fell across common terms like malware (down 2%) and smishing (down 8%). Ransomware was the only term to see an increase in understanding, yet only 36% could correctly define the term. ... Cybersecurity training may not sound like most people’s idea of fun, but there are plenty of ways to keep it positive and even enjoyable. Deliver training in short, sharp modules, and don’t be afraid to use different approaches such as animation or humor if it fits well into your company culture. Making security training competitive and turning it into a game can also aid the process. The gamification of training modules has been shown to increase engagement and motivation, as well as improving attainment scores in testing.


Why are current cybersecurity incident response efforts failing?

A risk-based approach to incident response enables enterprises to prioritize vulnerabilities and incidents based on the level of risk they pose to an organization. The simplest way of framing risk is a calculation based on frequency of occurrence and severity. Malware frequently reaches endpoints, and response and clean-up can cost thousands of dollars (both directly and in lost productivity). Furthermore – and security teams all over the world would agree on this – vulnerabilities on internet-facing systems must be prioritized and remediated first. Those systems are continuously under attack, and as the rate of occurrence starts to approach infinity, so does risk. Similarly, there have been many threat groups that have cost enterprises millions directly, and in some cases tens of millions in lost operations and ERP system downtime. Large enterprises measure the cost of simple maintenance windows in ERP systems in tens of millions. One can only imagine, then, how substantial the calculation becomes for a breach of a business-critical application. As severity increases to that order of magnitude, so does risk.
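The “frequency times severity” framing can be written down directly; the figures below are illustrative assumptions, not from the article:

```python
def risk(frequency_per_year, cost_per_incident):
    """Annualized loss expectancy: expected incidents/year x cost/incident."""
    return frequency_per_year * cost_per_incident

# Prioritization then follows from the numbers: frequent-but-cheap endpoint
# malware and rare-but-catastrophic ERP breaches land on one comparable scale.
findings = {
    "internet-facing endpoint malware": risk(50, 5_000),        # frequent, cheap each
    "ERP system breach":                risk(0.1, 20_000_000),  # rare, catastrophic
    "internal tool misconfig":          risk(2, 10_000),
}
ranked = sorted(findings, key=findings.get, reverse=True)
```

Even with made-up inputs, the shape of the result matches the article’s argument: internet-facing exposure dominates through frequency, while business-critical application breaches dominate through severity, and both outrank low-stakes internal findings.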


3 Must-Have Modernization Competencies for Application Teams

To decide the best path forward, leverage Competency #1. Architects and decision-makers should begin with automated architectural assessment tools to assess the technical debt of their monolithic applications, accurately identify the source of that debt, and measure its negative impact on innovation. These insights will help teams early in the cloud journey to determine the best strategy moving forward. Using AI-based modernization solutions, architects can exercise Competency #2 and automatically transform complex monolithic applications into microservices — using both deep domain-driven observability via a passive JVM agent and sophisticated static analysis, analyzing flows, classes, usage, memory and resources to detect and unearth critical business domain functions buried within a monolith. Whether your application is still on-premises or you have already lifted and shifted to the cloud (Competency #3), the world’s most innovative organizations are applying vFunction on their complex “megaliths” to untangle complex, hidden and dense dependencies for business-critical applications that often total over 10 million lines of code and consist of thousands of classes.


The surprising upside to provocative conversations at work

To be sure, supporting and encouraging sensitive conversations isn’t easy. However, leaders can create the right conditions by establishing norms, offering resources, and helping ensure that these conversations happen in safe environments, with ground rules about avoiding judgment or trying to persuade people to change their minds. Critically, employees should always have the option to just show up and listen to better understand how colleagues are impacted by something happening in the world. The objective of these conversations should definitely not be to reach solutions or generate consensus. In that way, fostering these conversations is a growth opportunity for senior executives as well, who are often much more comfortable in problem-solving mode. The leader’s role here is to help the company bring meaning, humanity, and social impact to the workforce—not to deliver answers. The main takeaway for senior leaders is that you can’t isolate employees from the issues of the world. You can, however, help them sort through those issues and create a more welcoming, inclusive environment in which people are free to be their authentic selves—and maybe even learn from their colleagues.



Quote for the day:

"Cream always rises to the top...so do good leaders" -- John Paul Warren

Daily Tech Digest - May 25, 2022

Into the Metaverse: How Digital Twins Can Change the Business Landscape

With hybrid work becoming the norm, the mapping technology to build and manage workplace digital twins could also make it easier for startups to enter the market. New businesses that would otherwise need to invest in corporate real estate can achieve virtual flexibility at a lower cost. Because real-time mapping affords visualization of indoor assets, managers of airports or hospitals, for instance, can view multiple floors, entrances, stairwells and rooms to watch what's happening and where. We will likely see crossover in how this in-the-moment tracking of equipment and resources plays out in the metaverse and in the real world. ... While the metaverse will likely represent an avenue of escape and entertainment for many, there's the potential for it to be a valuable business tool with the capability to offer real-world simulations. It's something one consultant has been doing on such a scale as to mimic the effects of global warming and show how it will disrupt businesses and entire cities. Experiencing one's own replicated neighborhood relative to rising seas, encroaching storms and more, offers a visceral, relatable experience more likely to motivate action.


Infra-as-Data vs. Infra-as-Code: What’s the Difference?

On a high level, Infrastructure-as-Data tools like VMware’s Idem and Ansible, and Infrastructure-as-Code, dominated by Terraform, were created to help DevOps teams achieve their goals of simplifying and automating application deployments across multicloud and different environments, while helping to reduce manual configurations and processes. ... When cloud architectures need to be expressed using code, “you’re just writing more and more and more and more Terraform,” he said. “Idem is different from how you generally think of Infrastructure as Code — everything boils down to these predictable datasets.” “Instead of sitting down and saying, ‘I’m going to write out a cloud in Terraform,’ you can point Idem towards your cloud, and it will automatically generate all of the data and all of the code and the runtimes to enforce it in its current state.” At the same time, Idem, as well as Ansible to a certain extent, were designed to make cloud provisioning more automated and simple to manage.
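The contrast can be sketched in miniature (resource shapes and field names below are hypothetical, not Idem’s actual schema): under Infrastructure-as-Data, the desired cloud is plain data, and a generic engine diffs it against reality to decide what to enforce.

```python
# Desired state expressed purely as data -- no templating logic, just records.
desired = [
    {"type": "vm", "name": "web-1", "size": "small", "state": "running"},
    {"type": "bucket", "name": "logs", "versioning": True},
]

def plan(desired, actual):
    """A generic engine compares datasets and returns the records to enforce:
    anything missing from, or different in, the actual cloud."""
    actual_by_name = {r["name"]: r for r in actual}
    return [r for r in desired if actual_by_name.get(r["name"]) != r]

# The cloud as discovered right now: web-1 exists but is stopped; no bucket.
actual = [{"type": "vm", "name": "web-1", "size": "small", "state": "stopped"}]
changes = plan(desired, actual)
```

Because the description is data rather than code, it can also be generated in the other direction – pointing the engine at an existing cloud and emitting the dataset that describes it – which is the workflow described above.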


How to develop competency in cyber threat intelligence capabilities

It is necessary to understand operating systems and networks principles at all levels: File storage, access management, log files policies, security policies, protocols used to share information between computers, et cetera. The core concepts, components and conventions associated with cyberdefense and cybersecurity should be identified, and a strong knowledge of industry best practices and frameworks is mandatory. Another core tenet is how defensive approaches and technology align to at least one of the five cyber defense phases: Identify, protect, detect, respond and recover. Key concepts to know here are identity and access management and control, network segmentation, cryptography use cases, firewalls, endpoint detection and response, signature- and behavior-based detection, threat hunting and incident response, and red and purple teams. One should develop a business continuity plan, disaster recovery plan and incident response plan. ... This part is all about understanding the role and responsibilities of everyone involved: Reverse engineers, security operation center analysts, security architects, IT support and helpdesk members, red/blue/purple teams, chief privacy officers and more.


Build collaborative apps with Microsoft Teams

Teams Toolkit for Visual Studio, Visual Studio Code, and command-line interface (CLI) are tools for building Teams and Microsoft 365 apps, fast. Whether you’re new to the Teams platform or a seasoned developer, Teams Toolkit is the best way to create, build, debug, test, and deploy apps. Today we are excited to announce the Teams Toolkit for Visual Studio Code and CLI is now generally available (GA). Developers can start with scenario-based code scaffolds for notification and command-and-response bots, automate upgrades to the latest Teams SDK version, and debug apps directly to Outlook and Office. ... Microsoft 365 App Compliance Program is designed to evaluate and showcase the trustworthiness of applications based on industry standards such as SOC 2, PCI DSS, and ISO 27001 for security, privacy, and data handling practices. We are announcing the preview of the App Compliance Automation Tool for Microsoft 365 for applications built on Azure, to help accelerate the compliance journey of those apps.


How API gateways complement ESBs

In the modern IT landscape, service development has moved toward an API-first and spec-first approach. IT environments are also becoming increasingly distributed. After all, organizations are no longer on-premises or even cloud-only, but working with hybrid cloud and multicloud environments. And their teams are physically distributed, too. Therefore, points of integration must be able to span various types of environments. The move toward microservices is fundamentally at odds with the traditional, monolithic ESB. By breaking down the ESB monolith into multiple focused services, you can retain many of the ESB’s advantages while increasing flexibility and agility. ... As API standards have matured, the API gateway can be leaner than an ESB, focused specifically on cross-cutting concerns. Additionally, the API gateway is focused primarily on client-service communication, rather than on all service-to-service communication. This specificity of scope allows API gateways to avoid scope creep, keeping them from becoming yet another monolith that needs to be broken down. When selecting an API gateway, it is important to find a product with a clear identity rather than an extensive feature set.
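To make the "cross-cutting concerns" point concrete, here is a minimal, self-contained sketch (a toy for illustration, not any particular product) of a gateway that applies authentication and rate limiting once, centrally, before routing client requests to backend services:

```python
from collections import defaultdict

class ApiGateway:
    """Toy gateway: cross-cutting concerns live here, not in each service."""

    def __init__(self, rate_limit: int = 3):
        self.routes = {}                  # path prefix -> backend handler
        self.rate_limit = rate_limit      # max calls per client token
        self.calls = defaultdict(int)

    def register(self, prefix: str, handler):
        self.routes[prefix] = handler

    def handle(self, token: str, path: str, payload=None):
        # Cross-cutting concern 1: client authentication
        if token != "valid-token":        # stand-in for a real token check
            return 401, "unauthorized"
        # Cross-cutting concern 2: per-client rate limiting
        self.calls[token] += 1
        if self.calls[token] > self.rate_limit:
            return 429, "rate limit exceeded"
        # Routing: dispatch to the longest matching path prefix
        for prefix in sorted(self.routes, key=len, reverse=True):
            if path.startswith(prefix):
                return 200, self.routes[prefix](payload)
        return 404, "no route"
```

Because authentication and throttling sit in one focused place, the individual services stay small, which is the agility argument the article makes for pairing lean gateways with microservices.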


Artificial intelligence is breaking patent law

Inventions generated by AI challenge the patent system in a new way because the issue is about ‘who’ did the inventing, rather than ‘what’ was invented. The first and most pressing question that patent registration offices have faced with such inventions has been whether the inventor has to be human. If not, one fear is that AIs might soon be so prolific that their inventions could overwhelm the patent system with applications. Another challenge is even more fundamental. An ‘inventive step’ occurs when an invention is deemed ‘non-obvious’ to a ‘person skilled in the art’. This notional person has the average level of skill and general knowledge of an ordinary expert in the relevant technical field. If a patent examiner concludes that the invention would not have been obvious to this hypothetical person, the invention is a step closer to being patented. But if AIs become more knowledgeable and skilled than all people in a field, it is unclear how a human patent examiner could assess whether an AI’s invention was obvious. An AI system built to review all information published about an area of technology before it invents would possess a much larger body of knowledge than any human could.


SIM-based Authentication Aims to Transform Device Binding Security to End Phishing

The SIM card has a lot going for it. SIM cards use the same highly secure, cryptographic microchip technology that is built into every credit card. It's difficult to clone or tamper with, and there is a SIM card in every mobile phone – so every one of your users already has this hardware in their pocket. The combination of the mobile phone number with its associated SIM card identity (the IMSI) is difficult to phish because it's a silent authentication check. The user experience is superior too. Mobile networks routinely perform silent checks that a user's SIM card matches their phone number in order to let them send messages, make calls, and use data – ensuring real-time authentication without requiring a login. Until recently, it wasn't possible for businesses to program the authentication infrastructure of a mobile network into an app as easily as any other code. tru.ID makes network authentication available to everyone. ... Moreover, with no extra input from the user, there's no attack vector for malicious actors: SIM-based authentication is invisible, so there are no credentials or codes to steal, intercept or misuse.
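The core check the article describes can be sketched in a few lines. This is a highly simplified illustration of the concept only – matching a claimed phone number to the SIM identity (IMSI) the carrier has on record – with hypothetical names throughout; it is not tru.ID's actual API:

```python
# Hypothetical carrier-side registry: phone number -> IMSI on record.
# Illustrative data only.
NETWORK_REGISTRY = {
    "+15550100": "310150123456789",
}

def silent_sim_check(phone_number: str, device_imsi: str) -> bool:
    """The mobile network confirms, with no user input, that the
    device's SIM identity matches the claimed phone number."""
    return NETWORK_REGISTRY.get(phone_number) == device_imsi
```

The key property is that nothing here is typed or seen by the user, so there is nothing for a phisher to capture.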


How to Manage Metadata in a Highly Scalable System

The realization that current data architectures can no longer support the needs of modern businesses is driving the need for new data engines designed from scratch to keep up with metadata growth. But as developers begin to look under the hood of the data engine, they are faced with the challenge of enabling greater scale without the usual impact of compromising storage performance, agility and cost-effectiveness. This calls for a new architecture to underpin a new generation of data engines that can effectively handle the tsunami of metadata and still make sure that applications can have fast access to metadata. Next-generation data engines could be a key enabler of emerging use cases characterized by data-intensive workloads that require unprecedented levels of scale and performance. For example, implementing an appropriate data infrastructure to store and manage IoT data is critical for the success of smart city initiatives. This infrastructure must be scalable enough to handle the ever-increasing influx of metadata coming from traffic management, security, smart lighting, waste management and many other systems without sacrificing performance.
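One common way next-generation data engines keep metadata access fast as volumes grow is to partition the metadata keyspace. The sketch below is an assumed, simplified illustration of that idea (hash-partitioned shards), not the design of any specific product:

```python
import hashlib

class ShardedMetadataStore:
    """Toy metadata store: keys are hashed across shards so no single
    structure has to hold the entire metadata set."""

    def __init__(self, num_shards: int = 8):
        self.shards = [dict() for _ in range(num_shards)]

    def _shard_for(self, key: str) -> dict:
        digest = hashlib.sha256(key.encode()).digest()
        return self.shards[int.from_bytes(digest[:4], "big") % len(self.shards)]

    def put(self, key: str, value) -> None:
        self._shard_for(key)[key] = value

    def get(self, key: str):
        return self._shard_for(key).get(key)
```

In a real system each shard would be a separate node or storage unit; the point is that lookups touch only one partition, so performance need not degrade as metadata from traffic, lighting, and other IoT systems keeps arriving.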


GDPR 4th anniversary: the data protection lessons learned

“As GDPR races to retrofit new legislative ‘add ons’ that most technology companies will have evolved well beyond by the time they’re implemented, GDPR is barely an afterthought for marketing professionals who are readying themselves for a much more seismic change this year: the crumbling of third-party cookies,” he explained. “Because of that, advertisers will require new, privacy-respecting, non-tracking-based approaches to reach their target audiences. Now, then, is the time for businesses to establish what a value exchange between users and an ad-funded, free internet actually looks like – but that goes far beyond the remit of GDPR.” To increase focus on privacy in commercial settings, McDermott believes that major stakeholders such as Google need to “lead the charge” and collaborate when it comes to establishing a best practice on data capture. “For the smaller businesses,” he added, “it’ll be about forming an allegiance with bigger technology companies who have the resources to navigate these changes so they can chart a course together.”


Where is attack surface management headed?

Organizations increasingly suffer from a lack of visibility, drown in threat-intelligence overload, and are hampered by inadequate tools. This means they struggle to discover, classify, prioritize, and manage internet-facing assets, which leaves them vulnerable to attack and incapable of defending their organization proactively. As attack surfaces expand, organizations can’t afford to limit their efforts to identification, discovery, and monitoring. They must improve their security management by adding continuous testing and validation. More can and should be done to make EASM solutions more effective and to reduce the number of tools teams need to manage. Solutions must also blend legacy EASM with vulnerability management and threat intelligence; this more comprehensive approach addresses business and IT risk from a single solution. When vendors integrate threat intelligence and vulnerability management into an EASM solution, and also enable lines of business within the organization to assign risk scores based on business value, its value increases exponentially.
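The risk-scoring idea mentioned above – weighting an asset's technical severity by its business value so remediation can be prioritized – can be sketched simply. The formula and field names here are assumptions for illustration, not how any particular EASM product computes risk:

```python
def asset_risk_score(cvss_severity: float, business_value: float) -> float:
    """Score an internet-facing asset on a 0-100 scale.

    cvss_severity: 0-10 (e.g., a CVSS base score)
    business_value: 0-1 weight assigned by the line of business
    """
    if not (0 <= cvss_severity <= 10 and 0 <= business_value <= 1):
        raise ValueError("inputs out of range")
    return round(cvss_severity * 10 * business_value, 1)

def prioritize(assets: list[dict]) -> list[dict]:
    """Order discovered assets so the riskiest are remediated first."""
    return sorted(assets,
                  key=lambda a: asset_risk_score(a["cvss"], a["value"]),
                  reverse=True)
```

Note how a moderately vulnerable but business-critical asset can outrank a severely vulnerable but low-value one – the point of letting lines of business supply the weighting.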



Quote for the day:

"The greatest good you can do for another is not just share your riches, but reveal to them their own." -- Benjamin Disraeli