Daily Tech Digest - June 06, 2022

How to Build a Data Science Enablement Team

Data scientists may use processes and tools you’re unfamiliar with, and those processes may not initially jibe with your own. For instance, data scientists may not think twice about emailing you code via Jupyter Notebooks. Or, they might use different versions of Python to create base images, with none in synchronization with each other. Consider offering alternatives to help them improve their workflows (and make your life a bit easier). For example, help them organize what they’re working on by setting up a Jupyter Hub instance or git repository. Making their jobs easier will help build the relationship. ... Most data scientists don’t want to become software developers any more than you probably want to become a data scientist. But bringing them into the DSET isn’t about getting them to learn more about software development — it’s about helping both you and them become more cognizant of the processes you both adhere to. So, while you’re empathizing with their work patterns, get them to understand how adopting some of your processes can help them in their daily workflows.


Feds Issue Alerts for Several Medical Device Security Flaws

The FDA in its alert for healthcare providers says the RUO devices are typically used in a development stage and are not for use in diagnostic procedures. But, it adds, many laboratories may be using the devices with tests for clinical diagnostic use. The vulnerabilities are exploitable remotely and have a low attack complexity, CISA says. The Illumina vulnerabilities involve path traversal, unrestricted upload of file with dangerous type, improper access control, and cleartext transmission of sensitive information. The vulnerabilities were scored as having CVSS v3 base scores of between 7.4 and 10.0. "Successful exploitation of these vulnerabilities may allow an unauthenticated malicious actor to take control of the affected product remotely and take any action at the operating system level," CISA warns. "An attacker could impact settings, configurations, software, or data on the affected product and interact through the affected product with the connected network." "Illumina has confirmed a security vulnerability affecting software in certain Illumina desktop sequencing instruments," the company says in a statement provided to Information Security Media Group. 


Crypto FUD: Quantum Computing Will Dwarf Blockchains’ Security

According to the research carried out by the team at Sussex, only a quantum computer with a processing power of over 317 million qubits could break down the SHA-256 algorithm within an hour or so. At the moment, IBM’s largest quantum processor boasts around 127 qubits, showing that it is still far behind the ‘possible’ processing power required to start causing damage to the Bitcoin algorithms. For Bitcoin’s blockchain to be broken, an attacker would need to perform a 51% attack, taking over the blocks’ mining process. Bitcoin mining is done using special hardware called Application-Specific Integrated Circuits (ASICs), made specifically for mining rigs. Mining relies on a hash-function property known as “puzzle friendliness”: there is no shortcut to producing a valid output, so any tampering with a block’s inputs yields a hash that the whole system detects, and the miner gets notified. That means no computer can tamper with the ASICs’ work without all miners working on the same block being notified concurrently.
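To make the mining mechanics concrete, here is a minimal proof-of-work sketch in Python. It is purely illustrative: the header string, the hex-zero difficulty target and the simplified double-SHA-256 are assumptions for the example, not Bitcoin’s real consensus rules, but it shows why puzzle friendliness forces brute-force search and why tampering is immediately visible to anyone re-checking the hash.

```python
import hashlib

def block_hash(header: str, nonce: int) -> str:
    """Double SHA-256 of the header plus nonce, in the style of Bitcoin mining."""
    data = f"{header}{nonce}".encode()
    return hashlib.sha256(hashlib.sha256(data).digest()).hexdigest()

def mine(header: str, difficulty: int = 4) -> int:
    """Brute-force a nonce whose hash starts with `difficulty` hex zeros.

    Puzzle friendliness means there is no shortcut: every candidate nonce
    must be hashed and checked, so the work scales with the target.
    """
    nonce = 0
    target = "0" * difficulty
    while not block_hash(header, nonce).startswith(target):
        nonce += 1
    return nonce

header = "prev_hash|merkle_root|timestamp"   # simplified stand-in for a real block header
nonce = mine(header)
print("valid nonce:", nonce, "hash:", block_hash(header, nonce))
# Change a single character of the header and the same nonce almost certainly
# no longer meets the target, which is how other nodes detect tampering.
print("tampered  :", block_hash(header + "x", nonce))
```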


8 ways level of detail could improve digital twins

The architectural, engineering, and construction industry uses a related concept called Level of Development in Building Information Modeling (BIM) to characterize changes in technical design depth across a project’s development process. It describes the level to which planning teams have fleshed out the specifications, geometry and attached information. In the early stages, planning groups may just want to quickly estimate the overall cost and complexity of a project before proceeding. Later, domain experts such as electricians, plumbers and structural engineers can plan out exact gauges of wire and pipe in richer depth. These later levels of development can help plan orders and schedule the construction sequence so that teams do not interfere with each other. ... In good experience design, it is often helpful to guide a user’s attention to a particular detail. For example, it might be more beneficial to highlight the exact screws a repair technician needs to remove rather than render a scene in complete detail using an augmented reality overlay. Researchers believe that using LOD for glanceable interfaces could clarify complicated repairs and procedures. In musical concerts, visual augmentation with LOD could enhance the audience experience.


Considering digital trust: why zero trust needs a rethink

Knowing that digital trust is now critical for all businesses and organisations, why has zero trust gained so much attention? Well, simply put, we can’t assume that anything should be trusted; we take a zero trust approach, then establish and maintain trust. From a security leader and CISO perspective, that means that we need to establish and maintain trust with all entities that make up and interact with the business. As such, digital trust here is the trust in machines, software, devices, and humans interacting with digital services that now power our world. It should not be confused with zero trust, which is often misinterpreted: the ‘zero’ implies that no trust at all exists. Trust is dynamic, and it needs to be constantly upheld. The way enterprises approach establishing digital trust is important to ensure the functioning of the business, but specifically the security of both human and machine identities. While many organisations focused on zero trust initiatives over the past few years, many recognised that trust in humans and machines is the foundational layer. In the modern enterprise, security leaders must design solid identity-first security frameworks deeply rooted in cryptography for digital trust to be established.


Connected Healthcare Takes Huge Leap Forward

Business and IT leaders who ignore connected healthcare do so at their own peril. A study from Doctor.com found that 83% of patients using telemedicine plan to continue with it after the pandemic. In addition, 68% prefer to use their mobile phone to make appointments and handle other tasks, and 91% say that connected tech is valuable for managing prescriptions and compliance. At some point -- and there’s some indication that it’s already happening -- consumer companies like Apple, Withings, Ōura and Fitbit will steal away opportunities for new products and services. Already, drug store chains and smaller and more disruptive companies are establishing footholds, and new and innovative healthcare products are appearing. “There are growing opportunities for data and app-related services, apps, subscriptions and more but traditional healthcare providers often don’t see this,” Schooley points out. Establishing an IT foundation to support connected health is vital. Hall says this includes a cloud-first architecture, integrating IoT and edge technologies, focusing on data standards, building more sophisticated and interactive apps, exploring partnerships, and cultivating skillsets needed to support both innovation and operations.


The costs and damages of DNS attacks

A DNS attack does not just result in an inconvenient business disruption but can be a costly expense for organizations. In the past 12 months, APAC has become the region with the highest average cost of a successful attack at $1,036,040, an increase of 14% compared to 2021, while EMEA and North America’s average cost of a successful attack has decreased by 4% and 7% respectively. Malaysia (21%), Germany (18%) and both India and the UK (14% each) experienced the highest increases in the cost of an attack, while Spain saw its cost of damages plummet by almost half (48%) compared to 2021. France and the US were the only other countries that saw a decline in the average cost, with 21% and 5% respectively. Cybercriminals are continuing to use all available tools to gain access to networks, disrupt the business and steal data by specifically targeting the hybrid workforce, with DNS-based attacks becoming increasingly pervasive across all industries. In the last year, 70% of organizations suffered in-house and cloud application downtime, with the average time to mitigate these threats increasing to 6 hours and 7 minutes, meaning that employees, partners, and customers were unable to access services.


Government Agencies Seize Domains Used to Sell Credentials

"The actions executed by our international partners included the arrest of a main subject, searches of several locations, and seizures of the web server's infrastructure," according to the DOJ. In December 2020, Britain's National Crime Agency reported arrests of 21 individuals on suspicion of purchasing personally identifiable information from the WeLeakInfo website for a variety of purposes, including the buying and selling of malicious cyber tools such as remote access Trojans, aka RATs, as well as to buy "cryptors," which can be used to obfuscate code in malware, according to the NCA. It has said that all are men, ranging in age from 18 to 38 and the arrests took place over a five-week period starting in November 2020. Beyond the 21 people arrested by police, another 69 individuals in England, Wales and Northern Ireland have received warnings from the NCA or other domestic law enforcement agencies, saying they may have engaged in criminal activity tied to the investigation. Sixty of those individuals also received cease-and-desist orders from police.


The Value of Data Mobility for Modern Enterprises

Despite all the excitement about data analytics, it’s not a silver bullet. Turning data into real business value isn’t simply a matter of deploying all the right tools. To be sure, it requires some smart investment in good technology, but ultimately, it’s got to be about identifying high-value business cases and making sure that your business users have what they need to deliver positive outcomes. Business success is virtually always about compromise. For years, CTOs have grappled with the pros and cons of unified systems versus best-of-breed environments. They have weighed the advantages of diverse, purpose-built systems against the inherent value of a large-scale monolithic platform that offers a holistic approach to the business. In the end, best-of-breed won that battle. As a result, the problem of data silos became more pronounced. The hunger for real-time analytics has rendered the pain caused by data silos far more palpable. But there is good news; if we make the data from all those different systems available in a single place, we can have the best of both worlds.


Digital transformation: How to gain organizational buy-in

Data analytics does not always require data scientists. CIOs and IT leaders often reach a turning point when they discover that most employees can be trained to become resident data analytics subject experts. When employees combine new knowledge of data analysis with their existing knowledge of the processes or machines, they can quickly be at the forefront of a digital journey. This is welcome news to most IT leaders, simply because the demand for skillsets in data science and cybersecurity has skyrocketed. Upskilling existing team members can be critical in attaining sustained adoption and continuous improvements of digital solutions. This includes long-term improvements in employee engagement and retention, increased cross-functional collaboration, and adoption of modern technology trends. Along with their technical skills, employees need to be skilled at diagnostics and problem-solving using the data now readily available to them. Employees who may have previously been data-gatherers can shift to become problem-solvers based on new data-driven insights. Make sure your employees are ready to learn and grow to take advantage of these opportunities.



Quote for the day:

"The essence of leadership is the willingness to make the tough decisions. Prepared to be lonely." -- Colin Powell

Daily Tech Digest - June 05, 2022

How the Web3 stack will automate the enterprise

Web3 is only partially in existence within enterprises but is already making an incredible impact and altering strategies. Cross River Bank, which just raised $620 million at a $3 billion valuation, powers embedded payments, cards, lending, and crypto solutions for over 80 leading technology partners. Cross River CEO Gilles Gade’s plan is to start offering more crypto-related products and services, gearing towards a crypto-first strategy. Investors are excited by the opportunity. “As Web3 continues to gain mindshare of consumers and businesses alike, we believe Cross River sits in a unique position to serve as the infrastructure and interconnective tissue between the traditional and regulated centralized financial system, as it transitions slowly to a decentralized one,” said Lior Prosor, General Partner and Co-founder of Hanaco Ventures in the Cross River press release. In many ways, this time is no different than when financial institutions and VCs saw the disruptive potential of FinTech innovation – analog to digital – and invested years prior. If FinTech is the blending of technology and finance, Web3 is the merging of crypto with the web.

Demystifying the Metrics Store and Semantic Layer

First, many critical data assets end up isolated on local servers, data centers and cloud services. Unifying them poses a significant challenge. Often, there are also no standardized data and business definitions, and this adds to the difficulty for businesses to tap into the full value of their data. As companies embark on new data management projects, they need to address these concerns; however, many have chosen to avoid this issue for one reason or another. This results in new data silos across the business. Second, as every data warehouse practitioner is aware, it’s difficult for most business users to interpret the data in the warehouse. Because technical metadata like table names, column names and data types are typically worthless to business users, data warehouses aren’t enough when it comes to allowing users to conduct analysis on their own. From a business user’s perspective, what can be done to solve this problem? Two popular solutions are metrics stores and semantic layers, but which is the best approach? And what’s the difference between them?
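To illustrate the gap described above, the sketch below shows the idea behind a metrics store or semantic layer in plain Python: a single, centrally owned definition of a business metric (“monthly active users”) that hides the cryptic technical column names from the people consuming it. The toy table, column names and metric definition are invented for illustration; real implementations sit on top of a warehouse rather than in-memory rows.

```python
from datetime import date

# Toy "warehouse" rows: in reality these live in tables whose technical
# names (event_ts, usr_id, ...) mean little to business users.
EVENTS = [
    {"usr_id": 1, "event_ts": date(2022, 5, 3), "evt_cd": "login"},
    {"usr_id": 2, "event_ts": date(2022, 5, 9), "evt_cd": "login"},
    {"usr_id": 1, "event_ts": date(2022, 5, 21), "evt_cd": "purchase"},
    {"usr_id": 3, "event_ts": date(2022, 4, 30), "evt_cd": "login"},
]

# The "semantic layer": one governed definition, owned centrally, mapping a
# business term to the underlying technical columns.
def monthly_active_users(rows, year: int, month: int) -> int:
    """Business definition: distinct users with any event in the given month."""
    return len({
        r["usr_id"]
        for r in rows
        if r["event_ts"].year == year and r["event_ts"].month == month
    })

# Every dashboard or report calls the shared definition instead of
# re-deriving it from raw table and column names.
print("MAU 2022-05:", monthly_active_users(EVENTS, 2022, 5))  # -> 2
```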


Why HR plays an important role in preventing cyber attacks

HR staff members often work with legal counsel on security policies, including the creation, maintenance and enforcement of acceptable usage policies. Since HR staff communicates frequently with employees, they are well positioned to share information about security and privacy expectations and often already work to keep security topics top-of-mind for employees. ... As with security policy work, HR professionals are often a valuable part of compliance-related initiatives because certain aspects of state, federal and international privacy and security compliance regulations require HR expertise. This is particularly true for larger organizations that have office locations or employees in multiple countries. HR may work on the creation of processes including user onboarding and offboarding, security awareness and training, and the steps for incident response once a crisis occurs. ... Some HR professionals already serve on their IT and security governance committee, as it's only natural that HR should help get the word out on security and assist with policy creation and administration when needed.


7 Reasons Why Serverless Encourages Useful Engineering Practices

They are easier to change. After reading the book “The Pragmatic Programmer”, I realized that making your software easy to change is THE de-facto principle to live by as an IT professional. For instance, when you leverage functional programming with pure (ideally idempotent) functions, you always know what to expect as input and output. Thus, modifying your code is simple. If written properly, serverless functions encourage code that is easy to change and stateless. They are easier to deploy — if the changes you made to an individual service don’t affect other components, redeploying a single function or container should not disrupt other parts of your architecture. This is one reason why many decide to split their Git repositories from a “monorepo” to one repository per service. With serverless, you are literally forced to make your components small. For instance, you cannot run any long-running processes with AWS Lambda (at least for now). At the time of writing, the maximum timeout configuration doesn’t allow for any process that takes longer than 15 minutes. 
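As a rough illustration of what a stateless, idempotent serverless function looks like, here is a Python sketch written against the AWS Lambda handler convention. The event fields, the idempotency-key approach and the in-memory store (standing in for an external store such as DynamoDB) are assumptions made for this example, not a prescribed pattern.

```python
import hashlib
import json

# In a real function this record of processed events would live in an
# external store; a module-level dict stands in for it here, since the
# function itself should stay stateless.
_PROCESSED = {}

def handler(event, context=None):
    """AWS Lambda-style entry point, pure with respect to the event payload.

    The same event always yields the same result, and replays are detected
    via an idempotency key, so platform retries are harmless.
    """
    key = hashlib.sha256(json.dumps(event, sort_keys=True).encode()).hexdigest()
    if key in _PROCESSED:                      # replayed delivery: return the prior result
        return _PROCESSED[key]

    order_total = sum(item["price"] * item["qty"] for item in event["items"])
    result = {"statusCode": 200, "body": json.dumps({"total": order_total})}

    _PROCESSED[key] = result
    return result

# Local test: invoking twice with the same event produces the same response.
evt = {"items": [{"price": 9.99, "qty": 2}, {"price": 4.50, "qty": 1}]}
assert handler(evt) == handler(evt)
```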



WTF is a Service Mesh?

The internal workings of a Service Mesh are conceptually fairly simple: every microservice is accompanied by its own local HTTP proxy. These proxies perform all the advanced functions that define a Service Mesh (think about the kind of features offered by a reverse proxy or API Gateway). However, with a Service Mesh this is distributed between the microservices—in their individual proxies—rather than being centralised. In a Kubernetes environment these proxies can be automatically injected into Pods, and can transparently intercept all of the microservices’ traffic; no changes to the applications or their Deployment YAMLs (in the Kubernetes sense of the term) are needed. These proxies, running alongside the application code, are called sidecars. These proxies form the data plane of the Service Mesh, the layer through which the data—the HTTP requests and responses—flow. This is only half of the puzzle though: for these proxies to do what we want they all need complex and individual configuration. Hence a Service Mesh has a second part, a control plane.
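For a feel of what the data plane does, here is a toy sidecar proxy in Python. Real meshes use production-grade proxies such as Envoy and are configured by the control plane; this sketch only shows the shape of the idea, with the port numbers and tracing header chosen arbitrarily: the proxy sits next to the service, and cross-cutting behaviour (request IDs, logging) lives in the proxy rather than in the application code.

```python
import urllib.request
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "http://127.0.0.1:8080"   # the local microservice this sidecar fronts

class Sidecar(BaseHTTPRequestHandler):
    """Toy data-plane proxy: requests to the service pass through here."""

    def do_GET(self):
        # Cross-cutting concerns live in the proxy, not the application:
        # inject a request ID for tracing and log the call before forwarding.
        req = urllib.request.Request(UPSTREAM + self.path)
        req.add_header("x-request-id", str(uuid.uuid4()))
        with urllib.request.urlopen(req) as upstream:
            body = upstream.read()
            self.log_message("proxied %s -> %s", self.path, upstream.status)
            self.send_response(upstream.status)
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    # In a real mesh the control plane would push this configuration;
    # here the listen port is hard-coded for illustration.
    HTTPServer(("127.0.0.1", 15001), Sidecar).serve_forever()
```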


Best Practices for Deploying Language Models

We’re recommending several key principles to help providers of large language models (LLMs) mitigate the risks of this technology in order to achieve its full promise to augment human capabilities. While these principles were developed specifically based on our experience with providing LLMs through an API, we hope they will be useful regardless of release strategy (such as open-sourcing or use within a company). We expect these recommendations to change significantly over time because the commercial uses of LLMs and accompanying safety considerations are new and evolving. We are actively learning about and addressing LLM limitations and avenues for misuse, and will update these principles and practices in collaboration with the broader community over time. We’re sharing these principles in hopes that other LLM providers may learn from and adopt them, and to advance public discussion on LLM development and deployment.


A cybersecurity expert explains why it would be so hard to obscure phone data in a post-Roe world

There’s not a whole lot users can do to protect themselves. Communications metadata and device telemetry – information from the phone sensors – are used to send, deliver and display content. Not including them is usually not possible. And unlike the search terms or map locations you consciously provide, metadata and telemetry are sent without you even seeing it. Providing consent isn’t plausible. There’s too much of this data, and it’s too complicated to decide each case. Each application you use – video, chat, web surfing, email – uses metadata and telemetry differently. Providing truly informed consent that you know what information you’re providing and for what use is effectively impossible. If you use your mobile phone for anything other than a paperweight, your visit to the cannabis dispensary and your personality – how extroverted you are or whether you’re likely to be on the outs with family since the 2016 election – can be learned from metadata and telemetry and shared.


Three Architectures That Could Power The Robotic Age With Autonomous Machine Computing

Similar to other information technology stacks, the autonomous machine computing technology stack consists of hardware, systems software and application software. Sitting in the middle of this technology stack is computer architecture, which defines the core abstraction between hardware and software. The existence of this abstraction layer allows software developers to focus on optimizing the software to fully utilize the underlying hardware to develop better applications as well as to achieve higher performance and higher energy efficiency. This abstraction layer also allows hardware developers to focus on developing faster, more affordable, more energy-efficient hardware that can unlock the imagination of software developers. ... Hence, computer architecture is essential to information technology. For instance, in the personal computing era, x86 has become the dominant computer architecture due to its superior performance. In the mobile computing era, ARM has become the dominant computer architecture due to its superior energy efficiency. 


Datadog finds serverless computing is going mainstream

Serverless represents the ideal state of cloud computing, where you only use exactly what resources you need and no more. That’s because the cloud provider delivers only those resources when a specific event happens and shuts it down when the event is over. It’s not a lack of servers, so much as not having to deploy the servers because the provider handles that for you in an automated fashion. When people began talking about cloud computing around 2008, one of the advantages was elastic computing, or only using what you need, scaling up or down as necessary. In reality, developers don’t know what they’ll need, so they’ll often overprovision to make sure the application stays up and running. The company created the report based on data running through its monitoring service. While it represents only the activity from its customers, Rabinovitch sees it as quality data given the broad range of customers it has using its services. “We do think we’re well represented across the industry, and we believe that we’re representative of real production workloads,” he said.


How Platform Engineering Helps Manage Innovation Responsibly

Platform engineering, then, is a support function. If it enables, it does so by reducing complexity and making it easier for developers and other technical teams to achieve their objectives. Moreover, one of the advantages of having a platform engineering team is that it can balance competing needs and aims — like, for example, developer experience and security — in a way that ensures engineering capabilities and commercial imperatives are properly aligned. Calling it a “support function” might not sound particularly sexy, but it nevertheless suggests that organizations are maturing in their approach to software development. It’s no longer the locus of moving fast and breaking things, but instead recognized as something that requires care and stewardship. But this implies responsibility — and that, to invert the old adage, carries considerable power. This means that platform engineering can become a political beast within organizations. If it can shape the way developers work, it can inevitably play a part in the direction of a whole technology strategy.



Quote for the day:

"Leadership is developed daily, not in a day." -- John C. Maxwell

Daily Tech Digest - June 02, 2022

A decentralized verification system could be the key to boosting digital security

Instead of placing trust in a single central entity, decentralization places trust in the network as a whole, and this network can exist outside of the IAM system using it. The mathematical structure of the algorithms underpinning the decentralized authority ensures that no single node can act alone. Moreover, each node on the network can be operated by an independently operating organization, such as a bank, telecommunication company, or government departments. So, stealing a single secret would require hacking several independent nodes. Even in the event of an IAM system breach, the attacker would only gain access to some user data – not the entire system. And to award themselves authority over the entire organization, they would need to breach a combination of 14 independently operating nodes. This isn’t impossible, but it’s a lot harder. But beautiful mathematics and verified algorithms still aren’t enough to make a usable system. There’s more work to be done before we can take decentralized authority from a concept to a functioning network that will keep our accounts safe.
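The article doesn’t spell out the underlying mathematics, but threshold schemes such as Shamir’s secret sharing capture the property being described: a quorum of independently operated nodes can jointly reconstruct a secret, while any smaller group learns nothing useful. Below is a minimal Python sketch with a toy secret and a 3-of-5 threshold chosen purely for illustration.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for a toy secret

def make_shares(secret: int, threshold: int, num_shares: int):
    """Split `secret` so that any `threshold` shares reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, num_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

secret = 123456789
shares = make_shares(secret, threshold=3, num_shares=5)   # e.g. 5 independent nodes
assert reconstruct(shares[:3]) == secret    # any 3 nodes acting together succeed
assert reconstruct(shares[:2]) != secret    # 2 nodes alone learn nothing useful
```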


Emerging digital twins standards promote interoperability

Digital twins today are mostly application-driven. “But what we really need is the interoperable digital twin so we can realize the interoperability between these different digital twins,” said Christian Mosch, general manager at IDTA. The IDTA Asset Administration Shell standard provides a framework for sharing data across the different lifecycle phases such as planning, development, construction, commissioning, operation and recycling at the end of life. It provides a way of thinking about assets such as a robot arm and the administration of the different data and documents that describe it across various lifecycle phases. The shell provides a container for consistently storing different types of information and documentation. For example, the robot arm might include engineering data such as 3D geometry drawings, design properties and simulation results. It may also include documentation such as declarations of conformity and proof certifications. The Asset Administration Shell also brings data from operations technology used to manage equipment on the shop floor into the IT realm to represent data across the lifecycle. 


4 Database Access Control Methods to Automate

The beauty of using security automation as a data broker is that it has the ability to validate data-retrieval requests. This includes verifying that the requestor actually has permission to see the data being requested. If the proper permissions aren’t in place, the user can submit a request to be added to a specific role through the normal request channels, which is typically the way to go. With automated data access control, this request could be generated and sent within the solution to streamline the process. This also allows additional context-specific information to be included in the data-access request automatically. For example, if someone requests data that they do not have access to within their role, the solution can be configured to look up the database owner, populate an access request and send it to the owner of the data, who can then approve one-time access or grant access for a certain period of time. A common scenario where this is useful is when an employee goes on vacation and someone new is helping with their clients’ needs while they are out.
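A bare-bones sketch of that broker logic in Python follows. The role table, dataset owners and request fields are all hypothetical; in a real deployment the lookups and routing would be wired into the organization’s IAM and ticketing systems rather than in-memory dictionaries.

```python
ROLE_GRANTS = {"analyst": {"sales_db"}, "support": {"crm_db"}}
DATASET_OWNERS = {"sales_db": "alice@example.com", "crm_db": "bob@example.com"}
PENDING_REQUESTS = []

def fetch_data(user: str, role: str, dataset: str):
    """Broker entry point: validate permission before touching the data."""
    if dataset in ROLE_GRANTS.get(role, set()):
        return f"rows from {dataset}"          # stand-in for the real query

    # No permission: generate a context-rich access request for the data owner
    # instead of silently failing, mirroring the workflow described above.
    request = {
        "requester": user,
        "role": role,
        "dataset": dataset,
        "owner": DATASET_OWNERS[dataset],
        "options": ["one_time_access", "time_bound_access"],
    }
    PENDING_REQUESTS.append(request)
    return f"access request sent to {request['owner']}"

print(fetch_data("carol", "analyst", "sales_db"))   # permitted by role
print(fetch_data("carol", "analyst", "crm_db"))     # triggers a request to the owner
```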


AI still needs humans to stay intelligent—here’s why

Remember, AI models are usually programmes or algorithms built to use data to recognise patterns, and either reach a conclusion or make a prediction. Once designed, paid for, and implemented, it’s easy to assume that these models will stay smart forever. Instead, they nearly always require regular human intervention. Why? Let’s look at a few examples: It’s likely that the technology your organisation uses in day-to-day operations is regularly changed and upgraded; Your company might have uncovered new intelligence about your customers, such as levels of interaction with a recently launched product; Your business’ strategies may change – for example, you might switch focus from reducing production costs to investing in a quality customer experience.  ... Where possible, avoid ‘technical debt’ by focusing on gradual AI improvements, rather than waiting for an issue to flare up and then facing a gruelling system overhaul. And finally, strive to create an AI-aware culture in your workplace. Educate your employees on how your AI systems work, why they’re reliable, why they’re to be trusted rather than feared – and that they’re not a replacement for their jobs.


Massive shadow code risk for world’s largest businesses

“While retail and credit card breaches grab the most headlines, this is a pervasive and relatively unchecked risk to both security and privacy across all verticals,” said Dan Dinnar, CEO of Source Defense. “It’s also a fast-growing and extremely volatile issue with regard to sensitive data. Organizations and their digital supply chain partners are constantly updating sites and code, and the data of greatest value to malicious actors is collected on the pages where the business has the greatest need for analytics, tag management, and other tracking and management capabilities.” Extensive libraries of third-party scripts are available free, or at low cost, from a range of communities, organizations, and even individuals, and are extremely popular as they allow development teams to quickly add advanced functionality to applications without the burden of creating and maintaining them. These packages also often contain code from additional parties further removed from – and farther out of the purview of – the deploying organization.


High-tech legislation through self-regulation

In industries where no direct legislation exists, judges have to rely on a multitude of secondary factors, putting additional strain on them. In some cases, they might be left only with the general principles of law. In web scraping, data protection laws, e.g. GDPR, became the go-to area for related cases. Many of them have been decided on the basis of these regulations and rightfully so. But scraping is much more than just data protection. Case law, mostly from the US, has in turn been used as one of the fundamental parts that have directed the way for our current understanding of the legal intricacies of web scraping. Although, regretfully, that direction isn’t set in stone. Yet, using such indirect laws and practices to regulate an industry, even with the best intentions, can lead to unsatisfying outcomes. A majority of the publicly accessible data is being held by specific companies, particularly social media websites. Social media companies and other data giants will do everything in their power to protect the data they hold. Unfortunately, they might sometimes go too far when protecting personal data.


Why AI Ethics Is Even More Important Now

AI ethics stems from a company's values. Those values should be reflected in the company's culture as well as how the company utilizes AI. One cannot assume that technologists can just build or implement something on their own that will necessarily result in the desired outcome(s). "You cannot create a technological solution that will prevent unethical use and only enable the ethical use," said Forrester's Carlsson. "What you need actually is leadership. You need people to be making those calls about what the organization will and won't be doing and be willing to stand behind those, and adjust those as information comes in." Translating values into AI implementations that align with those values requires an understanding of AI, the use cases, who or what could potentially benefit and who or what could be potentially harmed. "Most of the unethical use that I encounter is done unintentionally," said Forrester's Carlsson. "Of the use cases where it wasn't done unintentionally, usually they knew they were doing something ethically dubious and they chose to overlook it." Part of the problem is that risk management professionals and technology professionals are not yet working together enough.


Digital transformation: 5 ways to create a realistic strategy

Understand that digital transformation doesn’t just happen in the IT department; it happens in the C-suite, in cubicles, and in home offices. That means all stakeholders need to be aligned and in agreement with your company’s digital transformation goal. The directive must come from management, but the work will happen throughout the company, often precipitating a major cultural shift toward new technologies and processes. In such cases, training and change management might be necessary to make users feel more comfortable with the new tools and processes. Leaders need to ensure that their teams are on board with the direction the company is moving in, and they should be willing to listen to feedback as the organization continues along its journey. What that plan looks like is up to you. Digital transformation is different for everyone, and every company has its own objectives. Meeting those objectives can be daunting. But by setting a goal, performing an assessment, breaking your plan into manageable pieces, budgeting realistically, and getting everyone to buy in, you will succeed.


Three ways to prevent hybrid work from breaking your company culture

Companies need to take a hard look at the current environment and gauge how effectively it supports different types of work. Many aspects of office design are based on convention rather than deliberate thought. One analysis found that building thermostats typically have been calibrated for the comfort of men who are 40 years old and weigh approximately 154 pounds, which is cooler than is comfortable for most women. That norm was established decades ago and never updated. Just about every physical feature of the office can be made more conducive to hybrid work. Technology such as an online whiteboard for meetings, smart cameras that automatically pan to people as they talk, and virtual receptionists help to bridge the gap between virtual and in-office workforces. ... Last, leaders must set employees up for success. These support mechanisms can be quite diverse. The insurance company mentioned above, for instance, created training programs to give its employees the right skills to succeed in a hybrid workplace. These included tactical help on new technology, along with training for managers on effective virtual coaching conversations.


Why the Dual Operating Model Impedes Enterprise Agility

In the traditional organization, waiting for things (or queueing) is the norm: waiting for people to respond to emails, waiting days or weeks for a meeting because that’s the first open time on everyone’s calendar, or waiting for someone else to finish their part of a project so you can start yours. But waiting is death for agile teams; it wastes valuable time and diverts their focus. And when I say "death", I am not exaggerating for effect. Waiting makes agile teams ineffective, and over time it will kill the agile team’s ability to get things done. If an agile team has to wait every time it needs something from the rest of the organization, pretty soon it will act just like any other team. This is one reason why agile teams only seem to work on new initiatives that are completely disconnected from the existing organization: so long as they don’t have to interact with the rest of the organization, so long as they are completely self-contained, they don’t waste time waiting and they can work in an agile way. But once they need expertise or authority they don’t have, it all starts to fall apart.



Quote for the day:

"Being defeated is often a temporary condition. Giving up is what makes it permanent." -- Marilyn Vos Savant

Daily Tech Digest - May 26, 2022

4 Reasons to Shift Left and Add Security Earlier in the SDLC

Collaboration is critical for the security and development teams, especially when timelines have to change. The security operations center (SOC) team may need to train on cloud technologies and capabilities, while the cloud team may need help understanding how the organization performs risk management. Understanding the roles and responsibilities of these teams and the security functions each fulfill is critical to managing security risks. In some scenarios, security teams can act as enablers for cloud engineering, teaching teams how to be self-sufficient in performing threat-modeling exercises. In other situations, security teams can act as escalation paths during security incidents. Last, security teams can also own and operate underlying platforms or libraries that provide contextual value to more stream-oriented cloud engineering teams, such as IAC scanning capabilities, shared libraries for authentication and monitoring, and support of workloads constructs, such as secure service meshes.


We have bigger targets than beating Oracle, say open source DB pioneers

The pitching of open source against Oracle's own proprietary database has shifted as the market has moved on and developers now lead database strategy, building a wide range of applications in the cloud rather than a narrower set of business applications. Zaitsev pointed out that if you look at the rankings on DB-Engines, which combines mentions, job ads and social media data, Oracle is always the top RDBMS. But a Stack Overflow survey would not even put Oracle in the top five. So as far as developers are concerned, the debate about whether Oracle is the enemy is over. "The reality is, the majority of developers — especially good developers — prefer open source," he said. ... "There's a lot of companies now who are basically saying, 'Forget the Oracle API, I want to standardise on the PostgreSQL API.' They don't even want a non-PostgreSQL API because they see it is a growing market and opportunity with additional cost savings, flexibility, and continual innovation," he said, also speaking at Percona Live. "Years ago, if you had to rewrite your application from Oracle to PostgreSQL, that was a negative, that was a cost to you. ..."


Ultrafast Computers Are Coming: Laser Bursts Drive Fastest-Ever Logic Gates

The researchers’ advances have opened the door to information processing at the petahertz limit, where one quadrillion computational operations can be processed per second. That is almost a million times faster than today’s computers operating with gigahertz clock rates, where 1 petahertz is 1 million gigahertz. “This is a great example of how fundamental science can lead to new technologies,” says Ignacio Franco, an associate professor of chemistry and physics at Rochester who, in collaboration with doctoral student Antonio José Garzón-Ramírez ’21 (PhD), performed the theoretical studies that lead to this discovery. ... The ultrashort laser pulse sets in motion, or “excites,” the electrons in graphene and, importantly, sends them in a particular direction—thus generating a net electrical current. Laser pulses can produce electricity far faster than any traditional method—and do so in the absence of applied voltage. Further, the direction and magnitude of the current can be controlled simply by varying the shape of the laser pulse (that is, by changing its phase).


A computer cooling breakthrough uses a common material to boost power 740 percent

Researchers at the University of Illinois at Urbana-Champaign (UIUC) and the University of California, Berkeley (UC Berkeley) have recently devised an invention that could cool down electronics more efficiently than other alternative solutions and enable a 740 percent increase in power per unit, according to a press release by the institutions published Thursday. Tarek Gebrael, the lead author of the new research and a UIUC Ph.D. student in mechanical engineering, explained that current cooling solutions have three specific problems. "First, they can be expensive and difficult to scale up," he said. He brought up the example of heat spreaders made of diamond, which is obviously very expensive. Second, he described how conventional heat spreading approaches generally place the heat spreader and a heat sink (a device for dissipating heat efficiently) on top of the electronic device. Unfortunately, "in many cases, most of the heat is generated underneath the electronic device," meaning that the cooling mechanism isn't where it is needed most.


Tech firms are making computer chips with human cells – is it ethical?

Cortical Labs believes its hybrid chips could be the key to the kinds of complex reasoning that today’s computers and AI cannot produce. Another start-up making computers from lab-grown neurons, Koniku, believes their technology will revolutionise several industries including agriculture, healthcare, military technology and airport security. Other types of organic computers are also in the early stages of development. While silicon computers transformed society, they are still outmatched by the brains of most animals. For example, a cat’s brain contains 1,000 times more data storage than an average iPad and can use this information a million times faster. The human brain, with its trillion neural connections, is capable of making 15 quintillion operations per second. This can only be matched today by massive supercomputers using vast amounts of energy. The human brain only uses about 20 watts of energy, or about the same as it takes to power a lightbulb. It would take 34 coal-powered plants generating 500 megawatts per hour to store the same amount of data contained in one human brain in modern data storage centres.


SolarWinds: Here's how we're building everything around this new cybersecurity strategy

Now, SolarWinds uses a system of parallel builds, where the location keeps changing, even after the project has been completed and shipped. Much of this access is only provided on a need-to-know basis. That means if an attacker was ever able to breach the network, there's a smaller window to poison the code with a malicious build. "What we're really trying to achieve from a security standpoint is to reduce the threat window, providing the least amount of time possible for a threat actor to inject malware into our code," said Ramakrishna. But changing the process of how code is developed, updated and shipped isn't going to help prevent cyberattacks alone, which is why SolarWinds is now investing heavily in many other areas of cybersecurity. These areas include the likes of user training and actively looking for potential vulnerabilities in networks. Part of this involved building up a red team, cybersecurity personnel who have the job of testing network defences and finding potential flaws or holes that could be abused by attackers – crucially before the attackers find them.


How to stop your staff ignoring cybersecurity advice

While regular reminders are great, if you deliver the same message repeatedly, there is a danger that staff will zone out and ultimately become disengaged with the process. We’ve seen clear evidence of this over the past year, with awareness of key phrases falling, sometimes significantly. In this year’s State of the Phish Report, just over half (53%) of users could correctly define phishing, down from 63% the previous year. Recognition also fell across common terms like malware (down 2%) and smishing (down 8%). Ransomware was the only term to see an increase in understanding, yet only 36% could correctly define the term. ... Cybersecurity training may not sound like most people’s idea of fun, but there are plenty of ways to keep it positive and even enjoyable. Deliver training in short, sharp modules, and don’t be afraid to use different approaches such as animation or humor if it fits well into your company culture. Making security training competitive and turning it into a game can also aid the process. The gamification of training modules has been shown to increase engagement and motivation, as well as improving attainment scores in testing.


Why are current cybersecurity incident response efforts failing?

A risk-based approach to incident response enables enterprises to prioritize vulnerabilities and incidents based on the level of risk they pose to an organization. The simplest way of framing risk is a calculation on frequency of occurrence and severity. Malware frequently reaches endpoints, and response and clean-up can cost thousands of dollars (both directly and in lost productivity). Furthermore – and security teams all over the world would agree on this – vulnerabilities on internet-facing systems must be prioritized and remediated first. Those systems are continuously under attack, and as the rate of occurrence starts to approach infinity, so does risk. Similarly, there have been many threat groups that have cost enterprises millions directly, and in some cases tens of millions in lost operations and ERP system downtime. Large enterprises measure the cost of simple maintenance windows in ERP systems in tens of millions. Thus, it’s difficult to even estimate the cost of a business-critical application breach. As severity increases to that order of magnitude, so does risk.
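The frequency-times-severity framing is simple enough to sketch directly. The scoring scales, the weighting for internet-facing assets and the sample findings below are invented for illustration; the point is only that a risk-based queue orders work by likelihood and impact rather than by arrival time.

```python
# Findings scored on invented 1-5 scales for likelihood (frequency of
# occurrence) and impact (severity); internet-facing assets get a bump
# because they are continuously under attack.
findings = [
    {"name": "SQLi on public web app", "likelihood": 5, "impact": 5, "internet_facing": True},
    {"name": "Weak cipher on internal service", "likelihood": 2, "impact": 3, "internet_facing": False},
    {"name": "Unpatched ERP component", "likelihood": 3, "impact": 5, "internet_facing": False},
]

def risk_score(finding: dict) -> float:
    """Risk as frequency of occurrence times severity, weighted for exposure."""
    score = finding["likelihood"] * finding["impact"]
    return score * 1.5 if finding["internet_facing"] else score

# Remediation queue, highest risk first.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{risk_score(f):>5.1f}  {f['name']}")
```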


3 Must-Have Modernization Competencies for Application Teams

To decide the best path forward, leverage Competency #1. Architects and decision-makers should begin with automated architectural assessment tools to assess the technical debt of their monolithic applications, accurately identify the source of that debt, and measure its negative impact on innovation. These insights will help teams early in the cloud journey to determine the best strategy moving forward. Using AI-based modernization solutions, architects can exercise Competency #2 and automatically transform complex monolithic applications into microservices — using both deep domain-driven observability via a passive JVM agent and sophisticated static analysis, analyzing flows, classes, usage, memory and resources to detect and unearth critical business domain functions buried within a monolith. Whether your application is still on-premises or you have already lifted and shifted to the cloud (Competency #3), the world’s most innovative organizations are applying vFunction on their complex “megaliths” to untangle complex, hidden and dense dependencies for business-critical applications that often total over 10 million lines of code and consist of thousands of classes.


The surprising upside to provocative conversations at work

To be sure, supporting and encouraging sensitive conversations isn’t easy. However, leaders can create the right conditions by establishing norms, offering resources, and helping ensure that these conversations happen in safe environments, with ground rules about avoiding judgment or trying to persuade people to change their minds. Critically, employees should always have the option to just show up and listen to better understand how colleagues are impacted by something happening in the world. The objective of these conversations should definitely not be to reach solutions or generate consensus. In that way, fostering these conversations is a growth opportunity for senior executives as well, who are often much more comfortable in problem-solving mode. The leader’s role here is to help the company bring meaning, humanity, and social impact to the workforce—not to deliver answers. The main takeaway for senior leaders is that you can’t isolate employees from the issues of the world. You can, however, help them sort through those issues and create a more welcoming, inclusive environment in which people are free to be their authentic selves—and maybe even learn from their colleagues.



Quote for the day:

"Cream always rises to the top...so do good leaders" -- John Paul Warren

Daily Tech Digest - May 25, 2022

Into the Metaverse: How Digital Twins Can Change the Business Landscape

With hybrid work becoming the norm, the mapping technology to build and manage workplace digital twins could also make it easier for startups to enter the market. New businesses that would otherwise need to invest in corporate real estate can achieve virtual flexibility at a lower cost. Because real-time mapping affords visualization of indoor assets, managers of airports or hospitals, for instance, can view multiple floors, entrances, stairwells and rooms to watch what's happening and where. We will likely see crossover in how this in-the-moment tracking of equipment and resources plays out in the metaverse and in the real world. ... While the metaverse will likely represent an avenue of escape and entertainment for many, there's the potential for it to be a valuable business tool with the capability to offer real-world simulations. It's something one consultant has been doing on such a scale as to mimic the effects of global warming and show how it will disrupt businesses and entire cities. Experiencing one's own replicated neighborhood relative to rising seas, encroaching storms and more, offers a visceral, relatable experience more likely to motivate action.


Infra-as-Data vs. Infra-as-Code: What’s the Difference?

On a high level, Infrastructure-as-Data tools like VMware’s Idem and Ansible, and Infrastructure-as-Code, dominated by Terraform, were created to help DevOps teams achieve their goals of simplifying and automating application deployments across multicloud and different environments, while helping to reduce manual configurations and processes. ... When cloud architectures need to be expressed using code, “you’re just writing more and more and more and more Terraform,” he said. “Idem is different from how you generally think of Infrastructure as Code — everything boils down to these predictable datasets.” “Instead of sitting down and saying, ‘I’m going to write out a cloud in Terraform,’ you can point Idem towards your cloud, and it will automatically generate all of the data and all of the code and the runtimes to enforce it in its current state.” At the same time, Idem, as well as Ansible to a certain extent, were designed to make cloud provisioning more automated and simple to manage.
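The distinction is easiest to see as desired state expressed as plain data plus a generic engine that enforces it. The Python sketch below is a toy reconciliation loop with an invented resource schema; Idem’s actual state format and plugin model are richer than this, so treat it only as an illustration of the “everything boils down to predictable datasets” idea.

```python
# Desired state expressed purely as data, the way Infrastructure-as-Data
# tools model it; nothing here is executable on its own.
desired_state = [
    {"kind": "bucket", "name": "logs", "versioning": True},
    {"kind": "bucket", "name": "backups", "versioning": False},
]

# Stand-in for the provider's actual state (what an Idem-style tool would
# discover by pointing at the cloud account).
actual_state = {"logs": {"kind": "bucket", "name": "logs", "versioning": False}}

def reconcile(desired, actual):
    """Generic engine: diff data against data and emit the needed actions."""
    actions = []
    for resource in desired:
        current = actual.get(resource["name"])
        if current is None:
            actions.append(("create", resource))
        elif current != resource:
            actions.append(("update", resource))
    return actions

for verb, resource in reconcile(desired_state, actual_state):
    print(verb, resource["name"])   # -> update logs, then create backups
```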


How to develop competency in cyber threat intelligence capabilities

It is necessary to understand operating system and network principles at all levels: File storage, access management, log files policies, security policies, protocols used to share information between computers, et cetera. The core concepts, components and conventions associated with cyberdefense and cybersecurity should be identified, and a strong knowledge of industry best practices and frameworks is mandatory. Another core tenet is how defensive approaches and technology align to at least one of the five cyber defense phases: Identify, protect, detect, respond and recover. Key concepts to know here are identity and access management and control, network segmentation, cryptography use cases, firewalls, endpoint detection and response, signature- and behavior-based detections, threat hunting and incident response, and red and purple teams. One should develop a business continuity plan, disaster recovery plan and incident response plan. ... This part is all about understanding the role and responsibilities of everyone involved: Reverse engineers, security operation center analysts, security architects, IT support and helpdesk members, red/blue/purple teams, chief privacy officers and more.


Build collaborative apps with Microsoft Teams

Teams Toolkit for Visual Studio, Visual Studio Code, and command-line interface (CLI) are tools for building Teams and Microsoft 365 apps, fast. Whether you’re new to Teams platform or a seasoned developer, Teams Toolkit is the best way to create, build, debug, test, and deploy apps. Today we are excited to announce the Teams Toolkit for Visual Studio Code and CLI is now generally available (GA). Developers can start with scenario-based code scaffolds for notification and command-and-response bots, automate upgrades to the latest Teams SDK version, and debug apps directly to Outlook and Office. ... Microsoft 365 App Compliance Program is designed to evaluate and showcase the trustworthiness of application-based industry standards, such as SOC 2, PCI DSS, and ISO 27001 for security, privacy, and data handling practices. We are announcing the preview of the App Compliance Automation Tool for Microsoft 365 for applications built on Azure to help them accelerate the compliance journey of their apps.


How API gateways complement ESBs

In the modern IT landscape, service development has moved toward an API-first and spec-first approach. IT environments are also becoming increasingly distributed. After all, organizations are no longer on-premises or even cloud-only, but working with hybrid cloud and multicloud environments. And their teams are physically distributed, too. Therefore, points of integration must be able to span various types of environments. The move toward microservices is fundamentally at odds with the traditional, monolithic ESB. By breaking down the ESB monolith into multiple focused services, you can retain many of the ESB’s advantages while increasing flexibility and agility. ... As API standards have matured, the API gateway can be leaner than an ESB, focused specifically on cross-cutting concerns. Additionally, the API gateway is focused primarily on client-service communication, rather than on all service-to-service communication. This specificity of scope allows API gateways to avoid scope creep, keeping them from becoming yet another monolith that needs to be broken down. When selecting an API gateway, it is important to find a product with a clear identity rather than an extensive feature set.
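To make “focused specifically on cross-cutting concerns” concrete, here is a toy Python gateway wrapper that applies authentication and rate limiting in front of any backend handler. The token store, limits and handler are placeholders invented for the example; a real API gateway would terminate HTTP, validate real credentials and route to remote services.

```python
import time
from collections import defaultdict

VALID_TOKENS = {"secret-token"}   # placeholder for a real identity provider
RATE_LIMIT = 5                    # requests per client per window
WINDOW_SECONDS = 60
_request_log = defaultdict(list)

def gateway(handler):
    """Wrap any backend handler with the gateway's cross-cutting concerns."""
    def wrapped(client_id: str, token: str, payload: dict):
        if token not in VALID_TOKENS:                        # authentication
            return {"status": 401, "body": "unauthorized"}
        now = time.time()
        recent = [t for t in _request_log[client_id] if now - t < WINDOW_SECONDS]
        if len(recent) >= RATE_LIMIT:                         # rate limiting
            return {"status": 429, "body": "rate limit exceeded"}
        _request_log[client_id] = recent + [now]
        return handler(payload)                               # service-specific logic
    return wrapped

@gateway
def orders_service(payload):
    return {"status": 200, "body": f"created order for {payload['item']}"}

print(orders_service("client-1", "secret-token", {"item": "widget"}))
```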


Artificial intelligence is breaking patent law

Inventions generated by AI challenge the patent system in a new way because the issue is about ‘who’ did the inventing, rather than ‘what’ was invented. The first and most pressing question that patent registration offices have faced with such inventions has been whether the inventor has to be human. If not, one fear is that AIs might soon be so prolific that their inventions could overwhelm the patent system with applications. Another challenge is even more fundamental. An ‘inventive step’ occurs when an invention is deemed ‘non-obvious’ to a ‘person skilled in the art’. This notional person has the average level of skill and general knowledge of an ordinary expert in the relevant technical field. If a patent examiner concludes that the invention would not have been obvious to this hypothetical person, the invention is a step closer to being patented. But if AIs become more knowledgeable and skilled than all people in a field, it is unclear how a human patent examiner could assess whether an AI’s invention was obvious. An AI system built to review all information published about an area of technology before it invents would possess a much larger body of knowledge than any human could.


SIM-based Authentication Aims to Transform Device Binding Security to End Phishing

The SIM card has a lot going for it. SIM cards use the same highly secure, cryptographic microchip technology that is built into every credit card. It's difficult to clone or tamper with, and there is a SIM card in every mobile phone – so every one of your users already has this hardware in their pocket. The combination of the mobile phone number and its associated SIM card identity (the IMSI) is difficult to phish because the authentication check is silent. The user experience is superior too. Mobile networks routinely perform silent checks that a user's SIM card matches their phone number in order to let them send messages, make calls, and use data – ensuring real-time authentication without requiring a login. Until recently, it wasn't possible for businesses to program the authentication infrastructure of a mobile network into an app as easily as any other code. tru.ID makes network authentication available to everyone. ... Moreover, with no extra input from the user, there's no attack vector for malicious actors: SIM-based authentication is invisible, so there are no credentials or codes to steal, intercept or misuse.


How to Manage Metadata in a Highly Scalable System

The realization that current data architectures can no longer support the needs of modern businesses is driving the need for new data engines designed from scratch to keep up with metadata growth. But as developers begin to look under the hood of the data engine, they are faced with the challenge of enabling greater scale without the usual impact of compromising storage performance, agility and cost-effectiveness. This calls for a new architecture to underpin a new generation of data engines that can effectively handle the tsunami of metadata and still make sure that applications can have fast access to metadata. Next-generation data engines could be a key enabler of emerging use cases characterized by data-intensive workloads that require unprecedented levels of scale and performance. For example, implementing an appropriate data infrastructure to store and manage IoT data is critical for the success of smart city initiatives. This infrastructure must be scalable enough to handle the ever-increasing influx of metadata coming from traffic management, security, smart lighting, waste management and many other systems without sacrificing performance.


GDPR 4th anniversary: the data protection lessons learned

“As GDPR races to retrofit new legislative ‘add ons’ that most technology companies will have evolved well beyond by the time they’re implemented, GDPR is barely an afterthought for marketing professionals who are readying themselves for a much more seismic change this year: the crumbling of third-party cookies,” he explained. “Because of that, advertisers will require new, privacy-respecting, non-tracking-based approaches to reach their target audiences. Now, then, is the time for businesses to establish what a value exchange between users and an ad-funded, free internet actually looks like – but that goes far beyond the remit of GDPR.” To increase focus on privacy in commercial settings, McDermott believes that major stakeholders such as Google need to “lead the charge” and collaborate when it comes to establishing a best practice on data capture. “For the smaller businesses,” he added, “it’ll be about forming an allegiance with bigger technology companies who have the resources to navigate these changes so they can chart a course together.”


Where is attack surface management headed?

Organizations increasingly suffer from a lack of visibility, drown in threat intelligence overload, and are hampered by inadequate tools. As a result, they struggle to discover, classify, prioritize, and manage internet-facing assets, which leaves them vulnerable to attack and unable to defend themselves proactively. As attack surfaces expand, organizations can’t afford to limit their efforts to merely identifying, discovering, and monitoring assets. They must improve their security management by adding continuous testing and validation. More can and should be done to make EASM solutions more effective and to reduce the number of tools teams need to manage. Solutions must also blend legacy EASM with vulnerability management and threat intelligence; this more comprehensive approach addresses business and IT risk from a single solution. When vendors integrate threat intelligence and vulnerability management into an EASM solution, and also enable lines of business within the organization to assign risk scores based on business value, the value increases exponentially.
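As a toy illustration of that last point, the sketch below weights a technical severity score by a business-value score assigned by the owning line of business. The asset list, fields and weighting are invented for illustration and are not drawn from any particular EASM product.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    max_cvss: float        # highest CVSS score among open findings (0-10)
    business_value: int    # 1 (low) to 5 (critical), assigned by the business owner
    internet_facing: bool

def risk_score(asset: Asset) -> float:
    """Blend technical severity with business context; the weighting is illustrative."""
    exposure = 1.5 if asset.internet_facing else 1.0
    return asset.max_cvss * asset.business_value * exposure

# Hypothetical discovered assets, ordered by blended risk for remediation.
assets = [
    Asset("marketing-blog", max_cvss=9.8, business_value=1, internet_facing=True),
    Asset("payments-api", max_cvss=6.5, business_value=5, internet_facing=True),
    Asset("hr-intranet", max_cvss=7.2, business_value=3, internet_facing=False),
]

for a in sorted(assets, key=risk_score, reverse=True):
    print(f"{a.name}: {risk_score(a):.1f}")
```

Even in this toy ordering, the high-value payments API outranks a blog with a worse raw CVSS score, which is the kind of business-aware prioritisation the integrated approach is meant to enable.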



Quote for the day:

"The greatest good you can do for another is not just share your riches, but reveal to them their own." -- Benjamin Disraeli

Daily Tech Digest - May 24, 2022

7 machine identity management best practices

When keys and certificates are static, they become ripe targets for theft and reuse, says Anusha Iyer, co-founder and CTO at Corsha, a cybersecurity vendor. "In fact, credential stuffing attacks have largely shifted from human username and passwords to API credentials, which are essentially proxies for machine identity today," she says. As API ecosystems see immense growth, this problem is only becoming more challenging. Improper management of machine identities can lead to security vulnerabilities, agrees Prasanna Parthasarathy, senior solutions manager at the Cybersecurity Center of Excellence at Capgemini Americas. In the worst case, attackers can wipe out entire areas in the IT environment all at once, he says. "Attackers can use known API calls with a real certificate to gain access to process controls, transactions, or critical infrastructure – with devastating results." To guard against this, companies should have strict authorization of the source machines, cloud connections, application servers, handheld devices, and API interactions, Parthasarathy says. Most importantly, trusted certificates should not be static, he says.
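One common way to move away from static machine credentials is to mint short-lived, signed tokens per workload. The sketch below uses the PyJWT library; the claim names, five-minute lifetime and helper functions are illustrative choices rather than anything prescribed in the article.

```python
import datetime
import jwt  # PyJWT

SIGNING_KEY = "replace-with-a-managed-secret"  # ideally pulled from a secrets manager, not hard-coded

def issue_machine_token(machine_id: str, scope: str) -> str:
    """Issue a short-lived token so a leaked credential expires quickly."""
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {
        "sub": machine_id,                              # the workload's identity
        "scope": scope,                                 # what this machine may call
        "iat": now,
        "exp": now + datetime.timedelta(minutes=5),     # short lifetime instead of a static key
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

def verify_machine_token(token: str) -> dict:
    """Reject expired or tampered tokens on the API side."""
    return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])

token = issue_machine_token("build-agent-17", scope="artifacts:read")
print(verify_machine_token(token)["sub"])
```

The design choice being illustrated is simply that each credential is scoped and expires within minutes, so stolen API credentials lose value quickly, unlike the static keys and certificates the article warns about.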


Kalix: Build Serverless Cloud-Native Business-Critical Applications with No Databases

Kalix aims to provide a simple developer experience for modelling and building stateful and stateless cloud-native applications, along with a NoOps experience, including a unified way to do system design, deployment, and operations. In addition, it provides a Reactive Runtime that delivers ultra-low latency with high resilience by continuously optimizing data access, placement, locality, and replication. When using currently available Functions-as-a-Service (FaaS) offerings, application developers need to learn and manage many different SDKs and APIs to build a single application. Each component brings its own feature set, semantics, guarantees, and limitations. In contrast, Kalix provides a unifying application layer that pulls together the necessary pieces. These include databases, message brokers, caches, service meshes, API gateways, blob storage, CDN networks, CI/CD products, etc. Kalix exposes them through a single unified programming model, abstracting the implementation details from its users. By bringing all of these components into a single package, developers don't have to set up and tune databases, maintain and provision servers, or configure clusters, as the Kalix platform handles this.


Snake Keylogger Spreads Through Malicious PDFs

The campaign—discovered by researchers at HP Wolf Security—aims to dupe victims with an attached PDF file purporting to have information about a remittance payment, according to a blog post published Friday. Instead, it loads the info-stealing malware, using some tricky evasion tactics to avoid detection. “While Office formats remain popular, this campaign shows how attackers are also using weaponized PDF documents to infect systems,” HP Wolf Security researcher Patrick Schlapfer wrote in the post, which opined in the headline that “PDF Malware Is Not Yet Dead.” Indeed, attackers using malicious email campaigns have preferred to package malware in Microsoft Office file formats, particularly Word and Excel, for the past decade, Schlapfer said. In the first quarter of 2022 alone, nearly half (45 percent) of malware stopped by HP Wolf Security used Office formats, according to researchers. “The reasons are clear: users are familiar with these file types, the applications used to open them are ubiquitous, and they are suited to social engineering lures,” he wrote. 


Paying the ransom is not a good recovery strategy

“One of the hallmarks of a strong Modern Data Protection strategy is a commitment to a clear policy that the organization will never pay the ransom, but do everything in its power to prevent, remediate and recover from attacks,” added Allan. “Despite the pervasive and inevitable threat of ransomware, the narrative that businesses are helpless in the face of it is not an accurate one. Educate employees and ensure they practice impeccable digital hygiene; regularly conduct rigorous tests of your data protection solutions and protocols; and create detailed business continuity plans that prepare key stakeholders for worst-case scenarios.” The “attack surface” for criminals is diverse. Cyber-villains most often first gained access to production environments through errant users clicking malicious links, visiting unsecure websites or engaging with phishing emails — again exposing the avoidable nature of many incidents. After having successfully gained access to the environment, there was very little difference in the infection rates between data center servers, remote office platforms and cloud-hosted servers.


Beneath the surface: Uncovering the shift in web skimming

Web skimming typically targets platforms like Magento, PrestaShop, and WordPress, which are popular choices for online shops because of their ease of use and portability with third-party plugins. Unfortunately, these platforms and plugins come with vulnerabilities that the attackers have constantly attempted to leverage. One notable web skimming campaign/group is Magecart, which gained media coverage over the years for affecting thousands of websites, including several popular brands. In one of the campaigns we’ve observed, attackers obfuscated the skimming script by encoding it in PHP, which, in turn, was embedded inside an image file—a likely attempt to leverage PHP calls when a website’s index page is loaded. Recently, we’ve also seen compromised web applications injected with malicious JavaScript masquerading as Google Analytics and Meta Pixel (formerly Facebook Pixel) scripts. Some skimming scripts even had anti-debugging mechanisms, in that they first checked if the browser’s developer tools were open. Given the scale of web skimming campaigns and the impact they have on organizations and their customers, a comprehensive security solution is needed to detect and block this threat.
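On the defensive side, one simple check in the spirit of what the researchers describe is to compare the external script sources on a sensitive page against an allowlist and flag anything unexpected, such as a lookalike analytics domain. The allowlist, target URL and helper below are placeholders for illustration, not part of the campaign analysis.

```python
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

# Hosts we expect to serve scripts on our checkout page (illustrative allowlist).
ALLOWED_SCRIPT_HOSTS = {
    "www.googletagmanager.com",
    "www.google-analytics.com",
    "connect.facebook.net",
    "shop.example.com",
}

def unexpected_scripts(page_url: str) -> list[str]:
    """Return external script URLs whose host is not on the allowlist."""
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    flagged = []
    for tag in soup.find_all("script", src=True):
        host = urlparse(tag["src"]).netloc
        if host and host not in ALLOWED_SCRIPT_HOSTS:
            flagged.append(tag["src"])  # candidate skimmer or typosquatted analytics domain
    return flagged

if __name__ == "__main__":
    for src in unexpected_scripts("https://shop.example.com/checkout"):
        print("review:", src)
```

In practice this kind of check is usually complemented by a Content-Security-Policy restricting script-src and by Subresource Integrity hashes, which enforce the same idea in the browser itself.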


Next generation PIM: how AI can revolutionise product data

An ideal AI-powered PIM solution addresses a gamut of data management needs that translate to benefits like analysing images and comparing them to product descriptions; translating texts automatically; analysing and comparing data; understanding the statistical rules; and correcting what doesn’t comply with those rules. The use of AI in PIM also helps create a contextualised, straightforward path for businesses to pursue by providing new insights accrued from various products and customer data sets across channels. With the right training, a deep learning neural network can sweep through and analyse the metadata of different data sets to deliver accurate results across channels. Thus, it ultimately relieves organisations of time-consuming, repetitive tasks in managing changes or errors in their product data cycles. The role of PIM is constantly evolving; for example, in experiential retail, the PIM system needs to be implemented with AI for human context. Here, there is a change in both sides of the retailer-consumer dynamic, through product information management solutions that are expected to be more open network-oriented with AI.


IT Support for Edge Computing: Strategies to Make it Easier

IT vendors commonly assign account managers to major customer accounts for the purpose of managing relationships. If an issue arises, this account manager “point person” can summon the necessary resources and follow up to see that work and/or support is completed to a satisfactory resolution. IT can profit from the account manager approach with end users, especially if users have an abundance of edge applications and networks. An assigned business analyst who coordinates with tech support and others in IT can be the contact point person for an end-user department whenever a persistent problem occurs. This account manager can also periodically (at least quarterly) visit the user department and review technology performance and IT support. End users are more apt to communicate and cooperate with IT if they know they have someone to go to when they need to escalate an issue. ... There is no area of IT that is more qualified to give insights into how and where networks and systems are failing than technical support. This is because technical support is out there every day hearing about problems from end users, then troubleshooting those problems and deducing how they are happening.


How to Run Your Product Department Like a Coach

A key part of this new way of working was something that was drilled into me as an agile coach – keep teams together and give them time “to be teams”. Until this point, teams had formed and disbanded for each project, however, I knew that for us to move faster, the key would be high-performing teams and that takes time. Instead, we would try to keep people together and if needed, change their focus rather than disband them. This has easily been one of the most successful parts of a new way of working I brought to accuRx. As part of this focus, I worked closely with the CTO to establish clear leadership and accountability within each team. We agreed that every team would have a PM/TL (technical lead) pair, with both being held jointly accountable for the team being healthy and effective at delivering at pace. This “leadership in pairs” system has been crucial in allowing us to scale quickly whilst holding ourselves to account. The final piece of the jigsaw was ensuring that I was able to influence (or own) what our organisational structure would look like for Product (and Engineering).


Managed cloud services: 4 things IT leaders should know

Managed cloud services still require some internal expertise if you want to maximize your ROI – they should supercharge the IT team, not take its place. You can certainly use cloud managed services to do more with less – the constant marching order in today’s business world – and attain technological scale that wouldn’t otherwise be possible. But you should still do so in the context of your existing team and future hiring plans. “When developing a cloud-managed service strategy, you need to consider that we are now combining what used to be two separate sides of the house, infrastructure and application development,” DeCurtis notes. He adds that skills such as infrastructure as code will be essential for complex cloud services environments. If you’re already a mature DevOps shop, then you’re ahead of the game. Other teams may have some learning to do – and leadership may realize that people who can blend once-siloed job functions can be tough to find – though not as impossible as it once seemed. “Fortunately, these roles are becoming more readily available as organizations continue to adopt cloud strategies,” DeCurtis says.


IT risk management best practices for organisations

When we talk about risk, what we really mean is each organisation’s unique set of vulnerabilities. These loopholes are monitored, generically and specifically, by bad actors who would exploit them for financial or political gain, or occasionally just for clout. The first step, then, is to understand centres of risk within your organisation. These evolve with tech advances and behavioural change, for example with the transition to hybrid working brought on by the Covid-19 pandemic. “This has presented new challenges with expanded networks beyond the traditional office environment: no physical barriers or access controls, reduced VPN effectiveness, more endpoints and a greater attack surface to monitor,” says Folliss. “Remote working distorts an IT security team’s ability to manage and control the network and introduces new threats and vulnerabilities – and thus new risk.” So your analysis can’t be a one-off; rather, it must be a continuous, rigorous, and honest programme of testing and assessment that gets to the heart of an organisation’s DNA, says Pascal Geenens, director of threat intelligence at Radware.



Quote for the day:

"Confident and courageous leaders have no problems pointing out their own weaknesses and ignorance." -- Thom S. Rainer