Daily Tech Digest - January 17, 2022

Using Event-Driven Architecture With Microservices

The implementation of microservices is more complex than one may first think, exacerbated by the fact that many DevOps teams fall into the trap of making false assumptions about distributed computing. The list of distributed computing fallacies was originally compiled in 1994 by L. Peter Deutsch and others at Sun Microsystems, and it still holds true today. Several of these fallacies hold special importance for microservices implementations: that the network is reliable, homogeneous, and secure; that latency is zero; and that transport cost is zero. The smaller you make each microservice, the larger your service count, and the more the fallacies of distributed computing impact stability, user experience, and system performance. This makes it mission-critical to establish an architecture and implementation that minimizes latency while handling the realities of network and service outages. ... Microservices require connectivity and data to perform their roles and provide business value; however, data acquisition and communication have been largely ignored, and tooling severely lags behind. 
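As a minimal illustration of designing around two of those fallacies (the network is reliable, latency is zero), here is a hedged Go sketch of an event consumer that gives every handler call its own deadline and retries failed events with backoff. The event type, handler, and retry parameters are hypothetical placeholders rather than anything prescribed by the article.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// Event is a hypothetical payload passed between microservices.
type Event struct {
	ID   string
	Body []byte
}

// handle simulates downstream work that may fail or hang when the
// network is slow or unavailable.
func handle(ctx context.Context, e Event) error {
	select {
	case <-time.After(50 * time.Millisecond): // pretend work
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

// processWithRetry assumes the network is *not* reliable and latency is
// *not* zero: every attempt gets its own deadline, and failures are
// retried with simple exponential backoff before the event is dropped.
func processWithRetry(e Event, attempts int, timeout time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		ctx, cancel := context.WithTimeout(context.Background(), timeout)
		err = handle(ctx, e)
		cancel()
		if err == nil {
			return nil
		}
		time.Sleep(time.Duration(100*(1<<i)) * time.Millisecond) // backoff
	}
	return fmt.Errorf("event %s dropped after %d attempts: %w", e.ID, attempts, err)
}

func main() {
	if err := processWithRetry(Event{ID: "42"}, 3, 200*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}
```

In a real system the dropped event would go to a dead-letter queue rather than a log line, but the shape of the problem is the same: assume every hop can fail or stall.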


How AI will drive the hybrid work environment

The best way to begin is to establish a strong AI foundation, says Alex Smith, global AI product lead for knowledge work platform iManage. Since AI thrives on data, a central repository for all enterprise data is essential, and this can only be done in the cloud. In a world where access to data must be maintained for workers at home, in the office and anywhere in between, only the cloud has the capacity to deliver such broad connectivity. At the same time, the cloud makes it easier to search and share documents, email and other files, plus it provides advanced security, zero-touch architectures, threat analysis and other means to ensure access to data is managed properly – all of which can be augmented by AI as the data ecosystem scales in both size and complexity. Once this foundation is established, organizations can strategically implement AI across a range of processes to help ensure the work gets done, no matter where the employee is sitting. Knowledge management, for one, benefits tremendously from AI to help identify those with the needed experience and skillsets to accomplish a particular project.


Thousands of enterprise servers are running vulnerable BMCs, researchers find

The iLOBleed implant is suspected to be the creation of an advanced persistent threat (APT) group and has been used since at least 2020. It is believed to exploit known vulnerabilities such as CVE-2018-7078 and CVE-2018-7113 to inject new malicious modules into the iLO firmware that add disk-wiping functionality. Once installed, the rootkit also blocks attempts to upgrade the firmware and reports back that the newer version was installed successfully in order to trick administrators. However, there are ways to tell that the firmware was not upgraded. For example, the login screen in the latest available version should look slightly different; if it doesn't, the update was prevented, even though the firmware reports the latest version. It's also worth noting that infecting the iLO firmware is possible if an attacker gains root (administrator) privileges on the host operating system, since this allows flashing the firmware. Even if the server's iLO firmware has no known vulnerabilities, an attacker with that level of access can downgrade it to a vulnerable version. 
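For administrators who want an independent look at what iLO reports about itself, the BMC's Redfish interface can be queried out of band. The Go sketch below is a minimal example under the assumption that the manager resource lives at /redfish/v1/Managers/1/ and exposes the standard FirmwareVersion field; as the article notes, a compromised iLO can misreport its version, so treat this as one signal among several.

```go
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
)

// managerInfo holds the one Redfish field we care about here.
type managerInfo struct {
	FirmwareVersion string `json:"FirmwareVersion"`
}

func main() {
	// Hypothetical host and credentials taken from the environment; iLO BMCs
	// usually ship with self-signed certificates, hence InsecureSkipVerify
	// for a lab-only check.
	iloHost := os.Getenv("ILO_HOST")
	user, pass := os.Getenv("ILO_USER"), os.Getenv("ILO_PASS")

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	// Assumed path: the first (and usually only) manager resource on iLO.
	req, err := http.NewRequest("GET", "https://"+iloHost+"/redfish/v1/Managers/1/", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.SetBasicAuth(user, pass)

	resp, err := client.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var m managerInfo
	if err := json.NewDecoder(resp.Body).Decode(&m); err != nil {
		log.Fatal(err)
	}
	// Compare against the version you believe you flashed; remember that a
	// rootkitted iLO can report a false value, so cross-check the login
	// screen and vendor advisories as well.
	fmt.Println("iLO reports firmware version:", m.FirmwareVersion)
}
```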


The End Of Digital Transformation In Banking

Playing a game of catch-up, banks and credit unions have accelerated their digital banking transformation efforts. They have invested increasing amounts of capital and human resources into data and advanced analytics, innovation, modern technologies, back-office automation, and a reimagined workforce with a mission to improve the customer experience while reducing the cost to serve. Much of the impetus is that the fintech and big tech competitive landscape continues to expand, offering simple engagement and seamless experiences and causing customers to fragment their relationships with existing bank and credit union providers. The good news is that there are a multitude of options available to work with third-party providers that can deploy solutions faster than can be done if developed internally. Incumbent institutions can also partner with fintech and big tech competitors while modernizing their existing systems and processes at the same time. With every financial institution looking to become more digitally future-ready, it is more important than ever to understand the evolving financial industry landscape.


CISO As A Service Or Security Executive On Demand

As a company grows, so do its compliance and security obligations. Having a virtual CISO to turn to when needed can be incredibly helpful and save a company a lot of headaches when trying to navigate an ever-changing world of regulations or keep up with rapidly evolving security threats. In addition, having a vCISO in place can make the compliance process much more manageable. vCISO services are tailored to each company’s needs. vCISOs are professionals with extensive experience in cybersecurity, developing strategies and plans and applying different security methodologies across a range of organizations. In any case, the specific scope of vCISO services must be customized based on each company’s available internal resources and security needs. Obviously, as with any decision to outsource services, it must be supported by a preliminary analysis that shows that the effort and budgets allocated to information security legal and regulatory compliance are effectively optimized. 


AI to bring massive benefits, but also cause great concern

The powerful lure of harnessing the great power of AI to transform digital technology across the globe may blind users to the necessity of mitigating the accompanying risks of unethical use. The ethical ramifications often start with developers asking ‘can we build’ something novel versus ‘should we build’ something that can be misused in terrible ways. The rush to AI solutions has already created many situations where poor design, inadequate security, or architectural bias produced harmful unintended consequences. AI Ethics frameworks are needed to help guide organizations to act consistently and comprehensively when it comes to product design and operation. Without foresight, proper security controls, and oversight, malicious entities can leverage AI to create entirely new methods of attack that will be far superior to current defenses. These incidents have the potential to create impacts and losses at a scale matching the benefits AI can bring to society. It is important that AI developers and operators integrate cybersecurity capabilities to predict, prevent, detect, and respond to attacks against AI systems.


Why Is Data Destruction the Best Way to Impede Data Breach Risks?

Secure and certified media wiping helps eradicate data completely, without leaving behind any traces that could compromise the data or the device owner. Formatting and deleting generally allow retrieval of data from the supposedly empty space. Secure data erasure means that neither experts nor hackers can retrieve the data, even in a laboratory setting. When data is no longer usable and serves no purpose, it is known as “data at rest.” This type of data stored on digital devices is prone to malicious attacks. To prevent this data from being accessed, altered or stolen by people with malicious intent, organizations today use measures such as encryption, firewall security, etc. These measures aren’t enough to protect this “data at rest.” Over 70% of breach events come from off-network devices that are at rest. Data destruction is the most secure way to protect such data that is no longer in use. Devices that are no longer needed should be wiped permanently with a certified data sanitization tool using reliable data erasure standards.
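To make the difference between deleting and truly overwriting concrete, here is a hedged Go sketch that overwrites a file with random bytes before unlinking it. It illustrates the principle only: it follows no certified erasure standard, and on SSDs wear-levelling means file-level overwrites are not sufficient, which is why the article points to certified data sanitization tools.

```go
package main

import (
	"crypto/rand"
	"fmt"
	"io"
	"log"
	"os"
)

// overwriteAndRemove overwrites the file's current contents with random
// bytes, flushes to disk, then unlinks it. Illustrative only: real data
// sanitization must follow a recognized erasure standard and account for
// SSD wear-levelling, journaling filesystems, and backup copies.
func overwriteAndRemove(path string) error {
	f, err := os.OpenFile(path, os.O_WRONLY, 0)
	if err != nil {
		return err
	}
	info, err := f.Stat()
	if err != nil {
		f.Close()
		return err
	}
	// Stream random data over the existing bytes.
	if _, err := io.CopyN(f, rand.Reader, info.Size()); err != nil {
		f.Close()
		return err
	}
	if err := f.Sync(); err != nil {
		f.Close()
		return err
	}
	if err := f.Close(); err != nil {
		return err
	}
	return os.Remove(path)
}

func main() {
	// Hypothetical file name for the example.
	if err := overwriteAndRemove("obsolete-report.tmp"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("file overwritten and removed")
}
```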


Creating Psychological Safety in Your Teams

Successful organisations allow certain mistakes to happen. It is crucial that we distinguish between four types of mistakes and know how to deal with them. This way, we can foster a culture of learning from mistakes. I created the first two mistake types below, inspired by the research of Amy Edmondson, and the last two are taken directly from Amy Edmondson’s book “The Fearless Organization”. Unacceptable mistakes: When an employee does not wear a safety helmet in a factory in spite of all the training, resources, support, and help, and suffers an injury, that is an unacceptable failure. Gross misconduct at work can also be an example of an unacceptable mistake. In that case, we can respond with a warning or clear sanctions. Improvable mistakes: Putting a product or a service in front of our customers to find out its shortcomings and get customer feedback is an example of an improvable mistake. The idea is to identify areas of improvement for that product or service in an effort to make it better. Complex mistakes: These are caused by unfamiliar factors in a familiar context, such as severe flooding of a metro station due to a superstorm.


Ransomware is being rewritten in Go for joint attacks on Windows, Linux users

Despite having the ability to target users on a cross-platform basis, CrowdStrike said the vast majority (91%) of malware written in Golang targets Windows users, due to its market share; 8% targets users on macOS and just 1% of malware seeks to infect Linux machines. Pivoting to Golang is also an attractive proposition given that it performs around 40 times faster than optimised Python code. Golang can run more functions than C++, for example, which makes for a more effective product that can be more difficult to analyse. "Portability in malware means the expansion of the addressable market, in other words who might become a source of money," said Andy Norton, European cyber risk officer at Armis, speaking to IT Pro. "This isn’t the first time we've seen a shift towards more portable malware; a few years ago we saw a change towards Java-based remote access trojans away from .exe Windows-centric payloads."
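Much of that portability comes from Go's built-in cross-compilation: the same source compiles for Windows, Linux, and macOS simply by changing the GOOS and GOARCH environment variables. The harmless sketch below just reports the platform it was built for, to show how a single codebase yields binaries for all three targets.

```go
// Build the same file for different targets, e.g.:
//   GOOS=windows GOARCH=amd64 go build -o hello.exe hello.go
//   GOOS=linux   GOARCH=amd64 go build -o hello-linux hello.go
//   GOOS=darwin  GOARCH=arm64 go build -o hello-macos hello.go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// runtime.GOOS and runtime.GOARCH are fixed at compile time, so each
	// cross-compiled binary reports the platform it was built for.
	fmt.Printf("built for %s/%s\n", runtime.GOOS, runtime.GOARCH)
}
```

The same property that lets one team ship a CLI for every desktop OS is what lets malware authors address every desktop OS from one codebase.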


Developers and users need to focus on the strengths of different blockchains to maximize benefits

As more blockchains and decentralised finance (DeFi) protocols appear, it is important that governance systems are understood, ensuring that rules are agreed and followed, thereby encouraging transparency. Within the framework of traditional companies, those with leadership roles collectively govern. This differs from public blockchains, which use either direct governance, representative governance, or a combination of both. Whilst Bitcoin is run by an external foundation, other blockchains – such as Ripple – are governed by a company. Algorand, meanwhile, is an example of a blockchain with a seemingly more democratic approach to governance, allowing all members to discuss and make suggestions. Ethereum has a voting system in place, whereby users must spend 0.06 to 0.08 of an Ether to cast a vote. Some governance methods have received criticism. For example, the “veto mechanism” within the Bitcoin core team has raised concerns that miners are given more power to make decisions than everyday users.



Quote for the day:

"If you're relying on luck, you have already given up." -- Gordon Tredgold

Daily Tech Digest - January 16, 2022

Will blockchain fulfil its democratic promise or will it become a tool of big tech?

It’s easy to see why the blockchain idea evokes utopian hopes: at last, technology is sticking it to the Man. In that sense, the excitement surrounding it reminds me of the early days of the internet, when we really believed that our contemporaries had invented a technology that was democratising and liberating and beyond the reach of established power structures. ... What we underestimated, in our naivety, were the power of sovereign states, the ruthlessness and capacity of corporations and the passivity of consumers, a combination of which eventually led to corporate capture of the internet and the centralisation of digital power in the hands of a few giant corporations and national governments. ... Will this happen to blockchain technology? Hopefully not, but the enthusiastic endorsement of it by outfits such as Goldman Sachs is not exactly reassuring. The problem with digital technology is that, for engineers, it is both intrinsically fascinating and seductively challenging, which means that they acquire a kind of tunnel vision: they are so focused on finding solutions to the technical problems that they are blinded to the wider context.


Ultra-Long Battery Life Is Coming … Eventually

Experts say battery life is getting better in consumer electronics—through a combination of super-efficient processors, low-power states, and a little help from advanced technologies like silicon anode. It’s just not necessarily getting 10 times better. Conventional lithium-ion batteries have their energy density limits, and they typically improve by single-digit percentages each year. And there are downsides to pushing the limits of energy density. “Batteries are getting a little bit better, but when batteries get better in energy density, there’s usually a trade-off with cycle life,” says Venkat Srinivasan, who researches energy storage and is the director of the Argonne Collaborative Center for Energy Storage Science. “If you go to the big consumer electronics companies, they’ll have a metric they want to achieve, like we need the battery to last for 500 cycles over two or three years. But some of the smaller companies might opt for longer run times, and live with the fact that the product might not last two years.”


7 obstacles that organizations face migrating legacy data to the cloud

Asked why they're looking to move their legacy data off-premises and to the cloud, 46% of the executives cited regulatory compliance as the top reason. Some 38.5% pointed to cost savings as the biggest reason, while 8.5% mentioned business intelligence and analytics. The survey also asked respondents to identify the features and benefits that would most influence them to move their legacy data to the cloud. The major benefit cited by 66% was the integration of data and legacy archives. Some 59% cited the cloud as a way to centrally manage the archiving of all data including data from Office 365. Other reasons mentioned included data security and encryption, advanced records management, artificial intelligence-powered regulatory and compliance checking, and fast and accurate centralized search. Of course, anxiety over cyber threats and cyberattacks also plays a role in the decision to migrate legacy data. Among the respondents, 42% said that concerns over cybersecurity and ransomware attacks slightly or significantly accelerated the migration plans.


View cloud architecture through a new optimization lens

IT and enterprise management in general is getting wise to the fact that a solution that “works” or “seems innovative” does not really tell you why operations cost so much more than forecast. Today we need to audit and evaluate the end state of a cloud solution to provide a clear measure of its success. The planning and development phases of a cloud deployment are great places to plan and build in audit and evaluation procedures that will take place post-development to gauge the project’s overall ROI. This end-to-beginning view will cause some disturbance in the world of those who build and deploy cloud and cloud-related solutions. Most believe their designs and builds are cutting edge and built with the best possible solutions available at the time. They believe their designs are as optimized as possible. In most instances, they’re wrong. Most cloud solutions implemented during the past 10 years are grossly underoptimized. So much so that if companies did an honest audit of what was deployed versus what should have been deployed, a very different picture of a truly optimized cloud solution would take shape.


How Blockchain Startups Think about Databases and dApp Efficiency

When applications are built on top of a blockchain, these applications are inherently decentralized — hence referred to as dApps (decentralized applications). Most dApps today leverage a Layer 1 (L1) blockchain technology like Ethereum as their primary form of storage for transactions. There are two primary ways that dApps interact with the underlying blockchain: reads and writes. As an example, let’s use an NFT and gaming dApp that rewards winning gamers with coins they can then use to purchase NFTs: Writes are performed to an L1 chain whenever a gamer wins and coins are added to their wallet; reads are performed when a gamer logs into the game and needs to pull the associated NFT metadata for their game character (think stats, ranking, etc.). For an early-stage dApp building the game described above, writing directly to Ethereum is prohibitive because of slow performance (impacting latency) and high cost. To help developers in the dApp ecosystem, sidechains and Layer 2 (L2) solutions like Polygon improve performance. 
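To make the read path concrete, the hedged Go sketch below uses the go-ethereum client to perform a simple read (an account balance) against an RPC endpoint; the endpoint URL and wallet address are placeholders, and a real dApp would more likely read contract state such as ERC-721 metadata through generated bindings.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	// Placeholder RPC endpoint; in practice this would point at an L1 or L2
	// node (Ethereum, Polygon, etc.) or a hosted provider.
	client, err := ethclient.Dial("https://rpc.example.org")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Hypothetical player wallet address.
	addr := common.HexToAddress("0x0000000000000000000000000000000000000000")

	// A read: no transaction, no gas spent, just querying current chain state.
	balance, err := client.BalanceAt(context.Background(), addr, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("balance (wei):", balance)
}
```

A write, by contrast, means signing and broadcasting a transaction and paying gas, which is exactly why early-stage dApps push writes to cheaper L2s.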


Google calls for new government action to protect open-source software projects

“We need a public-private partnership to identify a list of critical open source projects — with criticality determined based on the influence and importance of a project — to help prioritize and allocate resources for the most essential security assessments and improvements,” Walker wrote. The blog post also called for an increase in public and private investment to keep the open-source ecosystem secure, particularly when the software is used in infrastructure projects. For the most part, funding and review of such projects are conducted by the private sector. The White House had not responded to a request for comment by the time of publication. “Open source software code is available to the public, free for anyone to use, modify, or inspect ... That’s why many aspects of critical infrastructure and national security systems incorporate it,” wrote Walker. “But there’s no official resource allocation and few formal requirements or standards for maintaining the security of that critical code. In fact, most of the work to maintain and enhance the security of open source, including fixing known vulnerabilities, is done on an ad hoc, volunteer basis.”


How AI Can Improve Software Development

By leveraging AI to automate the identification of the specific lines of code that require attention, developers can simply ask this AI-driven knowledge repository where behaviors are coming from—and quickly identify the code associated with that behavior. This puts AI squarely in the position of intelligence augmentation, which is key to leveraging its capabilities. This novel approach to AI reinterprets what the computation represents and converts it into concepts, thereby “thinking” about the code in the same way humans do. The result is that software developers no longer have to unearth the intent of previous developers encoded in the software to find potential bugs. Even better, developers are able to overcome the inadequacies of automated testing by using AI to validate that they haven't broken the system before they compile or check in the code. The AI will forward-simulate the change and determine whether its effect is isolated to the behavior under change. The result is that the bounds of the change are confined to the behavior under change, so no unintended consequences arise.


A busy year ahead in low-code and no-code development

There's logic to developers embracing low-code and no-code methodologies. "Developers love to code, but what they love more is to create, regardless of the language," says Steve Peak, founder of Story.ai. "Developers are always seeking new tools to create faster and with more enjoyment. Once low code and no code grow into tools that give developers more control over what they truly need to get done, they unquestionably will use them. It helps them get work done quicker with more enjoyment; examples of this are everywhere and are ingrained in most developers: a search for the next, better thing." At the same time, there is still much work to be done -- by professional developers, of course -- before true low-code or no-code capabilities are a reality. "Even the most popular tools in the market require significant API knowledge and most likely JavaScript experience," says Peak. "The products that do not require API or JavaScript experience are limited in functionality and often resemble custom Kanban boards and more media-rich spreadsheets wherein information logic is almost entirely absent."


The Future of the Metaverse + AI and Data Looks Bright

The next generation of VR headsets will collect more user information, including detecting the stress level of the user, and even facial recognition. “We’re going to see more capabilities and really understanding the biometrics that are generated from an individual, and be able to use that to enhance the training experience,” he says. That data collection will enable a feedback loop with the VR user. For example, if an enterprise is using VR to simulate a lineman repairing a high-voltage wire, the headset will be able to detect the anxiety level of the user. That information will inform the enterprise how to personalize the next set of VR lessons for the employee, Eckert says. “Remember, you’re running nothing more than software on a digital device, but because it senses three dimensions, you can put input through gesture hand control, through how you gaze, where you gaze. It’s collecting data,” he says. “Now that data can then be acted upon to create that feedback loop. And that’s why I think it’s so important. In this immersive world that we have, that feedback …will make it even that much more realistic of an experience.”


Data Engineering and Analytics: The End of (The End Of) ETL

Data virtualization does not purport to eliminate the requirement to transform data. In fact, most DV implementations permit developers, modelers, etc., to specify and apply different types of transformations to data at runtime. Does DAF? That is, how likely is it that any scheme can eliminate the requirement to transform data? Not very likely at all. Data transformation is never an end unto itself. It is rather a means to the end of using data, of doing stuff with data. ... Because this trope is so common, technology buyers should be savvy enough not to succumb to it. Yet, as the evidence of four decades of technology buying demonstrates, succumb to it they do. This problem is exacerbated in any context in which (as now) the availability of new, as-yet-untested technologies fuels optimism among sellers and buyers alike. Cloud, ML and AI are the dei ex machina of our age, contributing to a built-in tolerance for what amounts to utopian technological messaging. That is, people not only want to believe in utopia -- who wouldn’t wish away the most intractable of sociotechnical problems? -- but are predisposed to do so.



Quote for the day:

"Authority without wisdom is like a heavy axe without an edge, fitter to bruise than polish." -- Anne Bradstreet

Daily Tech Digest - January 15, 2022

Open source and mental health: the biggest challenges facing developers

The very nature of open source projects means their products are readily available and ripe for use. Technological freedom is something to be celebrated. However, it should not come at the expense of an individual’s mental health. Open source is set up for collaboration. But in reality, a collaborative approach does not always materialise. The accessibility of these projects means that many effective pieces of coding start as small ventures by individual developers, only to snowball into substantial projects on which companies rely but to which they rarely contribute back. Open source is for everyone, but responsibility comes along with that. If we want open source projects to stay around, any company using open source should dedicate substantial time to contributing back to those projects, avoiding unreasonable strain on individual developers by doing so. Sadly, 45% of developers report a lack of support with their open source work. Without sufficient support, the workload to maintain such projects can place developers under enormous pressure, reducing confidence in their ability and increasing anxiety.


Chaos Engineering - The Practice Behind Controlling Chaos

I always tell people that Chaos Engineering is a bit of a misnomer because it’s actually as far from chaotic as you can get. When performed correctly, everything is under the control of the operator. That mentality is the reason our core product principles at Gremlin are safety, simplicity and security. True chaos can be daunting and can cause harm. But controlled chaos fosters confidence in the resilience of systems and allows operators to sleep a little easier knowing they’ve tested their assumptions. After all, the laws of entropy guarantee the world will consistently keep throwing randomness at you and your systems. You shouldn’t have to help with that. One of the most common questions I receive is: “I want to get started with Chaos Engineering, where do I begin?” There is no one-size-fits-all answer, unfortunately. You could start by validating your observability tooling, ensuring auto-scaling works, testing failover conditions, or one of a myriad of other use cases. The one thing that does apply across all of these use cases is: start slow, but do not be slow to start.
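One way to "start slow" is a small, reversible latency-injection experiment in a staging environment. The hedged Go sketch below wraps an HTTP handler in middleware that delays a configurable fraction of requests, which is enough to check whether dashboards, alerts, and timeouts actually notice; the fraction and delay values are illustrative, not Gremlin's.

```go
package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// latencyChaos delays a fraction of requests by a fixed amount: a small,
// controlled, easily reversible experiment for validating observability
// tooling and client-side timeouts.
func latencyChaos(next http.Handler, fraction float64, delay time.Duration) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if rand.Float64() < fraction {
			time.Sleep(delay)
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	hello := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	// Delay 10% of requests by 300ms and watch whether your p99 alerts fire.
	http.Handle("/", latencyChaos(hello, 0.10, 300*time.Millisecond))
	http.ListenAndServe(":8080", nil)
}
```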


How to ward off the Great Resignation in financial services IT

The upshot for CIOs in financial services: You must adapt to recruit and keep talent – and build a culture that retains industry-leading talent. After recently interviewing more than 20 former financial services IT leaders who departed for other companies, I learned that it isn't about a bad boss or poor pay. They all fondly remembered their time at the firms, yet that wasn't enough to keep them. ... It is a journey that begins with small steps. Find something small to prove out and get teams to start working in this new way. Build a contest for ideas – assign numbers to submissions so executives have no idea who or what level submitted, and put money behind it. Have your teams vote on the training offered. This allows them to become active participants and feel their opinions matter. It can also improve the perception that technology is a priority, as you not only give teams access to learn new technologies but also encourage them to keep learning. ... The better these leaders work together, the more that impact, feeling of involvement, and innovation across teams can grow. 


DataOps or Data Fabric: Which Should Your Business Adopt First?

Every organization is unique, so every Data Strategy is equally unique. There are benefits to both approaches that organizations can adopt, although starting with a DataOps approach is likely to show the largest benefit in the shortest amount of time. DataOps and data fabric both correlate to maturity. It’s best to implement DataOps first if your enterprise has identified setbacks and roadblocks with data and analytics across the organization. DataOps can help streamline the manual processes or fragile integration points enterprises and data teams experience daily. If your organization’s data delivery process is slow to reach customers, then a more flexible, rapid, and reliable data delivery method may be necessary, signifying an organization may need to add on a data fabric approach. Adding elements of a data fabric is a sign that the organization has reached a high level of maturity in its data projects. However, an organization should start with implementing a data fabric over DataOps if it has many different and unique integration styles, and more sources and needs than traditional Data Management can address.


How to Repurpose an Obsolete On-Premises Data Center

Once a data center has been decommissioned, remaining servers and storage resources can be repurposed for applications further down the chain of business criticality. “Servers that no longer offer critical core functions may still serve other departments within the organization as backups,” Carlini says. Administrators can then migrate less important applications to the older hardware, and the IT hardware itself can be located, powered, and cooled in a less redundant and secure way. “The older hardware can continue on as backup/recovery systems, or spare systems that are ready for use should the main cloud-based systems go off-line,” he suggests. Besides reducing the need to purchase new hardware, reassigning last-generation data center equipment within the organization also raises the enterprise's green profile. It shows that the enterprise cares about the environment and doesn't want to add to the already existing data equipment in data centers, says Ruben Gamez, CEO of electronic signature tool developer SignWell. “It's also very sustainable.”


Mitigating Insider Security Threats with Zero Trust

Zero Trust aims at minimising lateral movement of attacks in an organisation, which is the most common cause of threat duplication and the spread of malware and viruses. While organising capture-the-flag events, we often set exercises that work with Metasploit, DDoS attacks, and understanding attack vectors and how attacks move. For example, one exercise used a phishing email attack targeting a user, containing a false memo that each employee was instructed to forward to their peers. That email had Microsoft PowerShell malware embedded and was used to show how often good-looking emails are too good to be genuine. And since attack vectors like these are often targeted inside organisations, Zero Trust suggests always verifying all network borders with equal scrutiny. Now, as with every new technology, Zero Trust is not built in a day, so it might sound like a lot of work for many small businesses, as security sometimes comes across as an expensive investment. 


Trends in Blockchain for 2022

Blockchain is ushering in major economic shifts. But the cryptocurrency market is still a ‘wild west’ with little regulation. According to recent reports, it appears the U.S. Securities and Exchange Commission is gearing up to more closely regulate the cryptocurrency industry in 2022. “More investment in blockchain is bringing it into the mainstream, but what’s holding back a lot of adoption is regulatory uncertainty,” said Parlikar. Forbes similarly reports regulatory uncertainty as the biggest challenge facing blockchain entrepreneurs. Blockchain is no longer relegated to the startup domain, either; well-established financial institutions also want to participate in the massive prosperity, said Parlikar. This excitement is causing a development-first, law-later mindset, similar to the legal grey area that followed Uber as it first expanded its rideshare business. “[Blockchain] businesses are trying to hedge risk,” Parlikar explained. “We want to comply and aren’t doing nefarious things intentionally—there’s just a tremendous opportunity to innovate and streamline operations and increase the end-user experience.”


New Vulnerabilities Highlight Risks of Trust in Public Cloud

The more significant of the two vulnerabilities occurred in AWS Glue, a serverless integration service that allows AWS users to manage, clean, and transform data, and makes the datastore available to the user's other services. Using this flaw, attackers could compromise the service and become an administrator — and because the Glue service is trusted, they could use their role to access other users' environments. The exploit allowed Orca's researchers to "escalate privileges within the account to the point where we had unrestricted access to all resources for the service in the region, including full administrative privileges," the company stated in its advisory. Orca's researchers could assume roles in other AWS customers' accounts that have a trusted relationship with the Glue service. Orca maintains that every account that uses the Glue service has at least one role that trusts the Glue service. A second vulnerability in the CloudFormation (CF) service, which allows users to provision resources and cloud assets, allowed the researchers to compromise a CF server and run as an AWS infrastructure service.
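For readers unfamiliar with the trust mechanism at the core of this issue, the hedged Go sketch below shows an ordinary, legitimate cross-account AssumeRole call using the AWS SDK for Go v2. The role ARN is a placeholder; whether the call succeeds is decided entirely by the target role's trust policy, which is exactly the control the Glue flaw effectively let attackers sidestep.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sts"
)

func main() {
	ctx := context.Background()

	// Load credentials for the calling account (env vars, shared config, etc.).
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Placeholder ARN: the call only succeeds if this role's trust policy
	// explicitly allows the caller (or a trusted service) to assume it.
	out, err := sts.NewFromConfig(cfg).AssumeRole(ctx, &sts.AssumeRoleInput{
		RoleArn:         aws.String("arn:aws:iam::111122223333:role/ExampleCrossAccountRole"),
		RoleSessionName: aws.String("audit-session"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("temporary credentials expire at:", out.Credentials.Expiration)
}
```

Auditing which of your roles trust external services or accounts is a practical takeaway from this research, regardless of whether you use Glue.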


Why Saying Digital Transformation Is No Longer Right

Technology is multiplicative; it doesn't know whether it's multiplying a positive or a negative. So, if you have bad customer service at the front counter, and you add technological enablement - voila! You're now able to deliver bad service faster, and to more people than ever before! The term ‘Digital Transformation’ implies a potentially perilous approach of focusing on technology first. In my career as a technology professional, I’ve seen my share of project successes and failures. The key differentiator between success and failure is the clarity of the desired outcome right from the start of the initiative. I had a colleague who used to say: “Projects fail at the start, most people only notice at the end.” Looking back at the successful initiatives which I was a part of, they possessed several common key ingredients: the clarity of a compelling goal, the engagement of people, and a discipline for designing enablement processes. With those ingredients in place, a simple, and reliable enabling tool (the technology), developed using clear requirements acts like an unbelievable accelerant.


Four key lessons for overhauling IT management using enterprise AI

One of the greatest challenges for CIOs and IT leaders these days is managing tech assets that are spread across the globe geographically and across the internet on multi-cloud environments. On one hand, there’s pressure to increase access for those people who need to be on your network via their computers, smartphones and other devices. On the other hand, each internet-connected device is another asset to be monitored and updated, a potential new entry point for bad actors, etc. That’s where the scalability of automation and machine learning is essential. As your organisation grows and becomes more spread out, there’s no need to expand your IT department. A unified IT management system, powered by AI, will keep communication lines open while continually alerting you to threats, triggering appropriate responses to input and making updates across the organisation. It is never distracted or overworked. ... When it comes to these enterprise AI solutions, integration can be more challenging. And in some cases, businesses end up spending as much on customising the solution as they did on the initial investment.



Quote for the day:

"Strong leaders encourage you to do things for your own benefit, not just theirs." -- Tim Tebow