Daily Tech Digest - January 17, 2022

Using Event-Driven Architecture With Microservices

The implementation of microservices is more complex than one may first think, exacerbated by the fact that many DevOps teams fall into the trap of making false assumptions about distributed computing. The list of distributed computing fallacies was originally compiled in 1994 by L. Peter Deutsch and others at Sun Microsystems, and still holds true today. Several of these fallacies hold special importance for microservices implementation: that the network is reliable, homogeneous, and secure; that latency is zero; and that transport cost is zero. The smaller you make each microservice, the larger your service count, and the more the fallacies of distributed computing degrade stability, system performance, and user experience. This makes it mission-critical to establish an architecture and implementation that minimizes latency while handling the realities of network and service outages. ... Microservices require connectivity and data to perform their roles and provide business value; however, data acquisition and communication have been largely ignored, and tooling severely lags behind.
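A minimal sketch of how a service call might defend against the fallacies above: a bounded timeout because latency is not zero, and jittered retries because the network is not reliable. The function and parameter names are illustrative, not taken from any particular framework:

```python
import random
import time

def call_with_retries(fn, attempts=3, base_delay=0.1, timeout=2.0):
    """Call a remote operation defensively: bound its latency with a
    timeout and retry transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            # `fn` is expected to honour `timeout` itself (e.g. pass it
            # through to the HTTP client); the network is NOT reliable.
            return fn(timeout=timeout)
        except (TimeoutError, ConnectionError):
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure to the caller
            # Jittered exponential backoff: latency is NOT zero, and
            # synchronized retries from many callers can overload a service.
            time.sleep(base_delay * (2 ** attempt) * (0.5 + random.random()))
```

In a real deployment this pattern is usually combined with a circuit breaker so that a failing downstream service is not hammered with retries from every caller at once.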


How AI will drive the hybrid work environment

The best way to begin is to establish a strong AI foundation, says Alex Smith, global AI product lead for knowledge work platform iManage. Since AI thrives on data, a central repository for all enterprise data is essential, and this can only be done in the cloud. In a world where access to data must be maintained for workers at home, in the office and anywhere in between, only the cloud has the capacity to deliver such broad connectivity. At the same time, the cloud makes it easier to search and share documents, email and other files, plus it provides advanced security, zero-touch architectures, threat analysis and other means to ensure access to data is managed properly – all of which can be augmented by AI as the data ecosystem scales in both size and complexity. Once this foundation is established, organizations can strategically implement AI across a range of processes to help ensure the work gets done, no matter where the employee is sitting. Knowledge management, for one, benefits tremendously from AI to help identify those with the needed experience and skillsets to accomplish a particular project.


Thousands of enterprise servers are running vulnerable BMCs, researchers find

The iLOBleed implant is suspected to be the creation of an advanced persistent threat (APT) group and has been used since at least 2020. It is believed to exploit known vulnerabilities such as CVE-2018-7078 and CVE-2018-7113 to inject new malicious modules into the iLO firmware that add disk-wiping functionality. Once installed, the rootkit also blocks attempts to upgrade the firmware and falsely reports that the newer version was installed successfully in order to trick administrators. However, there are ways to tell that the firmware was not upgraded. For example, the login screen in the latest available version should look slightly different; if it doesn't, the update was prevented, even if the firmware reports the latest version. It's also worth noting that infecting the iLO firmware is possible if an attacker gains root (administrator) privileges on the host operating system, since this allows flashing the firmware. Even if the server's iLO firmware has no known vulnerabilities, an attacker with such access can downgrade the firmware to a vulnerable version.
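The detection idea above — never trust the version the firmware reports, cross-check it against an independent observation such as the appearance of the login screen — can be sketched as follows. This is purely illustrative: the function name and the per-version fingerprint table are hypothetical, not real HPE data.

```python
import hashlib

def firmware_claim_is_consistent(reported_version, login_page_bytes,
                                 known_hashes):
    """Cross-check the version the firmware *reports* against an
    independent observation (here: a SHA-256 of the served login page).
    The iLOBleed rootkit fakes the reported version, so trusting the
    self-report alone is not enough.

    Returns True/False for a match/mismatch, or None when no reference
    fingerprint exists for the claimed version."""
    expected = known_hashes.get(reported_version)
    if expected is None:
        return None  # no known-good fingerprint for this version
    return hashlib.sha256(login_page_bytes).hexdigest() == expected
```

A `False` result here corresponds to the symptom described in the article: the firmware claims the latest version, but the observable artifact still matches an older build.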


The End Of Digital Transformation In Banking

Playing a game of catch-up, banks and credit unions have accelerated their digital banking transformation efforts. They have invested increasing amounts of capital and human resources into data and advanced analytics, innovation, modern technologies, back-office automation, and a reimagined workforce, with a mission to improve the customer experience while reducing the cost to serve. Much of the impetus comes from the expanding fintech and big tech competitive landscape, which offers simple engagement and seamless experiences, causing customers to fragment their relationships with existing bank and credit union providers. The good news is that there is a multitude of options available to work with third-party providers that can deploy solutions faster than internal development allows. Incumbent institutions can also partner with fintech and big tech competitors while modernizing their existing systems and processes at the same time. With every financial institution looking to become more digitally future-ready, it is more important than ever to understand the evolving financial industry landscape.


CISO As A Service Or Security Executive On Demand

As a company grows, so do its compliance and security obligations. Having a virtual CISO to turn to when needed can be incredibly helpful and save a company a lot of headaches when trying to navigate an ever-changing world of regulations or keep up with rapidly evolving security threats. In addition, having a vCISO in place can make the compliance process much more manageable. vCISO services are tailored to each company's needs. These are professionals with extensive experience in cybersecurity who develop strategies and plans and apply different security methodologies across organizations. In any case, the specific scope of vCISO services must be customized based on each company's available internal resources and security needs. Obviously, as with any decision to outsource services, it must be supported by a preliminary analysis showing that the effort and budgets allocated to information security and to legal and regulatory compliance are effectively optimized.


AI to bring massive benefits, but also cause great concern

The powerful lure of harnessing the great power of AI to transform digital technology across the globe may blind users to the necessity of mitigating the accompanying risks of unethical use. The ethical ramifications often start with developers asking 'can we build' something novel rather than 'should we build' something that can be misused in terrible ways. The rush to AI solutions has already created many situations where poor design, inadequate security, or architectural bias produced unintended, harmful consequences. AI ethics frameworks are needed to help guide organizations to act consistently and comprehensively when it comes to product design and operation. Without foresight, proper security controls, and oversight, malicious entities can leverage AI to create entirely new methods of attack that far outstrip current defenses. These incidents have the potential to create impacts and losses at a scale matching the benefits AI can bring to society. It is important that AI developers and operators integrate cybersecurity capabilities to predict, prevent, detect, and respond to attacks against AI systems.


Why Is Data Destruction the Best Way to Impede Data Breach Risks?

Secure and certified media wiping eradicates data completely, leaving behind no traces that could compromise the data or the device owner. Formatting and deleting generally leave data recoverable from the supposedly empty space; secure data erasure means that neither experts nor hackers can retrieve any data, even in a laboratory setup. Data that is stored on digital devices but no longer actively used or transmitted is known as "data at rest," and it is prone to malicious attacks. To prevent this data from being accessed, altered or stolen by people with malicious intent, organizations today use measures such as encryption, firewall security, etc. These measures aren't enough to protect data at rest: over 70% of breach events come from off-network devices holding such data. Data destruction is the most secure way to protect data that is no longer in use. Devices that are no longer needed should be wiped permanently with a certified data sanitization tool using reliable data erasure standards.


Creating Psychological Safety in Your Teams

Successful organisations allow certain mistakes to happen. It is crucial that we distinguish between four types of mistakes and know how to deal with them. This way, we can foster a culture of learning from mistakes. The first two mistake types below were inspired by the research of Amy Edmondson; the last two are taken directly from her book "The Fearless Organization". Unacceptable mistakes: When an employee does not wear a safety helmet in a factory in spite of all the training, resources, support, and help, and suffers an injury, that is an unacceptable failure. Gross misconduct at work can also be an example of an unacceptable mistake. In that case we can respond with a warning or clear sanctions. Improvable mistakes: Putting a product or a service in front of our customers to find out its shortcomings and get customer feedback is an example of an improvable mistake. The idea is to learn areas of improvement of that product or service in an effort to make it better. Complex mistakes: These are caused by unfamiliar factors in a familiar context, such as a severe flooding of a metro station due to a superstorm.


Ransomware is being rewritten in Go for joint attacks on Windows, Linux users

Despite having the ability to target users on a cross-platform basis, CrowdStrike said the vast majority (91%) of malware written in Golang targets Windows users due to its market share; 8% targets users on macOS and just 1% seeks to infect Linux machines. Pivoting to Golang is also an attractive proposition given that it performs around 40 times faster than optimised Python code. Go binaries also bundle far more functions than comparable C++ binaries, which makes for a product that is more difficult to analyse. "Portability in malware means the expansion of the addressable market, in other words who might become a source of money," said Andy Norton, European cyber risk officer at Armis, speaking to IT Pro. "This isn't the first time we've seen a shift towards more portable malware; a few years ago we saw a change towards Java-based remote access trojans away from .exe Windows-centric payloads."


Developers and users need to focus on the strengths of different blockchains to maximize benefits

As more blockchains and decentralised finance (DeFi) protocols appear, it is important that governance systems are understood, ensuring that rules are agreed and followed, thereby encouraging transparency. Within the framework of traditional companies, those with leadership roles collectively govern. This differs from public blockchains, which use direct governance, representative governance, or a combination of both. Whilst Bitcoin is run by an external foundation, other blockchains, such as Ripple, are governed by a company. Algorand, meanwhile, is an example of a blockchain with a seemingly more democratic approach to governance, allowing all members to discuss and make suggestions. Ethereum has a voting system in place, whereby users must spend 0.06 to 0.08 Ether to cast a vote. Some governance methods have received criticism. For example, the "veto mechanism" within the Bitcoin core team has raised concerns that miners are given more power to make decisions than everyday users.



Quote for the day:

"If you're relying on luck, you have already given up." -- Gordon Tredgold
