Daily Tech Digest - January 17, 2022

Using Event-Driven Architecture With Microservices

The implementation of microservices is more complex than one may first think, exacerbated by the fact that many DevOps teams fall into the trap of making false assumptions about distributed computing. The list of distributed computing fallacies was originally addressed in 1994 by L. Peter Deutsch and others at Sun Microsystems, and still holds true today. Several of these fallacies hold special importance for microservices implementations: that the network is reliable, homogeneous, and secure; that latency is zero; and that transport cost is zero. The smaller you make each microservice, the larger your service count, and the more the fallacies of distributed computing impact stability, user experience, and system performance. This makes it mission-critical to establish an architecture and implementation that minimizes latency while handling the realities of network and service outages. ... Microservices require connectivity and data to perform their roles and provide business value; however, data acquisition and communication have been largely ignored and tooling severely lags behind.
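
For a sense of what handling these fallacies looks like in code, here is a minimal sketch of a retry wrapper with exponential backoff and jitter, the kind of defensive call pattern event-driven services typically wrap around remote dependencies. The wrapped call and the exceptions it raises are illustrative assumptions, not a specific framework's API.

```python
import random
import time

def call_with_retries(call, attempts: int = 3, base_delay: float = 0.2):
    """Invoke a remote dependency, assuming the network is NOT reliable and latency is NOT zero."""
    for attempt in range(attempts):
        try:
            return call()
        except (ConnectionError, TimeoutError):
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            # exponential backoff plus jitter, to avoid synchronized retry storms
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

# Hypothetical usage: wrap any call to another microservice.
# inventory = call_with_retries(lambda: inventory_client.get_stock("sku-123"))
```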


How AI will drive the hybrid work environment

The best way to begin is to establish a strong AI foundation, says Alex Smith, global AI product lead for knowledge work platform iManage. Since AI thrives on data, a central repository for all enterprise data is essential, and this can only be done in the cloud. In a world where access to data must be maintained for workers at home, in the office and anywhere in between, only the cloud has the capacity to deliver such broad connectivity. At the same time, the cloud makes it easier to search and share documents, email and other files, plus it provides advanced security, zero-touch architectures, threat analysis and other means to ensure access to data is managed properly – all of which can be augmented by AI as the data ecosystem scales in both size and complexity. Once this foundation is established, organizations can strategically implement AI across a range of processes to help ensure the work gets done, no matter where the employee is sitting. Knowledge management, for one, benefits tremendously from AI to help identify those with the needed experience and skillsets to accomplish a particular project.


Thousands of enterprise servers are running vulnerable BMCs, researchers find

The iLOBleed implant is suspected to be the creation of an advanced persistent threat (APT) group and has been used since at least 2020. It is believed to exploit known vulnerabilities such as CVE-2018-7078 and CVE-2018-7113 to inject new malicious modules into the iLO firmware that add disk wiping functionality. Once installed, the rootkit also blocks attempts to upgrade the firmware and reports back that the newer version was installed successfully to trick administrators. However, there are ways to tell that the firmware was not upgraded. For example, the login screen in the latest available version should look slightly different. If it doesn't, it means that the update was prevented, even if the firmware reports the latest version. It's also worth noting that infecting the iLO firmware is possible if an attacker gains root (administrator) privileges to the host operating system since this allows flashing the firmware. If the server's iLO firmware has no known vulnerabilities, it is possible to downgrade the firmware to a vulnerable version. 


The End Of Digital Transformation In Banking

Playing a game of catch up, banks and credit unions have accelerated their digital banking transformation efforts. They have invested increasing amounts of capital and human resources into data and advanced analytics, innovation, modern technologies, back-office automation, and a reimagined workforce with a mission to improve the customer experience while reducing the cost to serve. Much of the impetus is because the fintech and big tech competitive landscape continues to expand, offering simple engagement and seamless experiences, causing customers to fragment relationships with their existing bank and credit union providers. The good news is that there are a multitude of options available to work with third-party providers that can deploy solutions faster than can be done if developed internally. Incumbent institutions can also partner with fintech and big tech competitors while modernizing their existing systems and processes at the same time. With every financial institution looking to become more digitally future-ready, it is more important than ever to understand the evolving financial industry landscape.


CISO As A Service Or Security Executive On Demand

As a company grows, so do its compliance and security obligations. Having a virtual CISO to turn to when needed can be incredibly helpful and save a company a lot of headaches when trying to navigate an ever-changing world of regulations or keep up with rapidly evolving security threats. In addition, having a vCISO in place can make the compliance process much more manageable. The vCISOs are tailored to each company’s needs. They are professionals with extensive experience in cybersecurity, developing strategies and plans and applying different security methodologies across organizations. In any case, the specific scope of vCISO services must be customized based on each company’s available internal resources and security needs. Obviously, as with any decision to outsource services, it must be supported by a preliminary analysis that shows that the effort and budgets allocated to information security and to legal and regulatory compliance are effectively optimized.


AI to bring massive benefits, but also cause great concern

The powerful lure of harnessing the great power of AI to transform digital technology across the globe may blind users to the necessity of mitigating the accompanying risks of unethical use. The ethical ramifications often start with developers asking ‘can we build’ something novel versus ‘should we build’ something that can be misused in terrible ways. The rush to AI solutions has already created many situations where poor design, inadequate security, or architectural bias produced harmful unintended consequences. AI Ethics frameworks are needed to help guide organizations to act consistently and comprehensively when it comes to product design and operation. Without foresight, proper security controls, and oversight, malicious entities can leverage AI to create entirely new methods of attack which will be far superior to the current defenses. These incidents have the potential to create impacts and losses at a scale matching the benefits AI can bring to society. It is important that AI developers and operators integrate cybersecurity capabilities to predict, prevent, detect, and respond to attacks against AI systems.


Why Is Data Destruction the Best Way to Impede Data Breach Risks?

Secure and certified media wiping helps eradicate data completely, without leaving behind any traces that could compromise the sanctity of the data or the device owner. Formatting and deleting generally leave data retrievable from the supposedly empty space. Secure data erasure means that experts and hackers can retrieve no data, even in a laboratory setup. When data is no longer usable and serves no purpose, it is known as “data at rest.” This type of data stored on digital devices is prone to malicious attacks. To prevent this data from being accessed, altered or stolen by people with malicious intent, organizations today use measures such as encryption, firewall security, etc. These measures aren’t enough to protect this “data at rest.” Over 70% of breach events come from off-network devices that are at rest. Data destruction is the most secure way to protect such data that is not in use anymore. Devices that are no longer needed are required to be wiped permanently with a certified data sanitization tool using reliable data erasure standards.
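
As a rough illustration of the difference between merely deleting a file and overwriting its contents, here is a naive single-file sketch. It is a toy example under stated assumptions: a certified erasure tool would follow a standard such as NIST SP 800-88 and also handle SSD wear-levelling, hidden areas, and verification, none of which this sketch attempts.

```python
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Toy illustration only: overwrite a file with random bytes before unlinking it.
    Certified data sanitization tools do far more (verification, SSDs, hidden areas)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # force the overwrite to disk before deleting
    os.remove(path)
```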


Creating Psychological Safety in Your Teams

Successful organisations allow certain mistakes to happen. It is crucial that we distinguish between four types of mistakes and know how to deal with them. This way, we can foster a culture of learning from mistakes. I created the first two mistake types below inspired by the research of Amy Edmondson and the last two mistake types are taken directly from Amy Edmondson’s book “The Fearless Organization”. Unacceptable mistakes: When an employee does not wear a safety helmet in a factory in spite of all the training, resources, support, and help, and suffers an injury, that is an unacceptable failure. Gross misconduct at work can also be an example of an unacceptable mistake. In that case we can respond with a warning or clear sanctions. Improvable mistakes: Putting a product or a service in front of our customers to find out its shortcomings and get customer feedback is an example of an improvable mistake. The idea is to learn areas of improvement of that product or service in an effort to make it better. Complex mistakes: These are caused by unfamiliar factors in a familiar context, such as a severe flooding of a metro station due to a superstorm.


Ransomware is being rewritten in Go for joint attacks on Windows, Linux users

Despite having the ability to target users on a cross-platform basis, CrowdStrike said the vast majority (91%) of malware written in Golang targets Windows users due to its market share; 8% targets users on macOS and just 1% seeks to infect Linux machines. Pivoting to Golang is also an attractive proposition given that it performs around 40 times faster than optimised Python code. Golang can run more functions than C++, for example, which makes for a more effective product that can be more difficult to analyse. "Portability in malware means the expansion of the addressable market, in other words who might become a source of money," said Andy Norton, European cyber risk officer at Armis, speaking to IT Pro. "This isn’t the first time we've seen a shift towards more portable malware; a few years ago we saw a change towards Java-based remote access trojans away from .exe Windows-centric payloads."


Developers and users need to focus on the strengths of different blockchains to maximize benefits

As more blockchains and decentralised finance (DeFi) protocols appear, it is important that governance systems are understood, ensuring that rules are agreed and followed, thereby encouraging transparency. Within the framework of traditional companies, those with leadership roles collectively govern. This differs from public blockchains that either use direct governance, representative governance, or a combination of both. Whilst Bitcoin is run by an external foundation, other projects – such as Ripple – are governed by a company. Algorand, meanwhile, is an example of a blockchain with a seemingly more democratic approach to governance, allowing all members to discuss and make suggestions. Ethereum has a voting system in place, whereby users must spend 0.06 to 0.08 of an Ether to cast a vote. Some governance methods have received criticism. For example, the “veto mechanism” within the Bitcoin core team has raised concerns that miners are given more power to make decisions than everyday users.



Quote for the day:

"If you're relying on luck, you have already given up." -- Gordon Tredgold

Daily Tech Digest - January 16, 2022

Will blockchain fulfil its democratic promise or will it become a tool of big tech?

It’s easy to see why the blockchain idea evokes utopian hopes: at last, technology is sticking it to the Man. In that sense, the excitement surrounding it reminds me of the early days of the internet, when we really believed that our contemporaries had invented a technology that was democratising and liberating and beyond the reach of established power structures. ... What we underestimated, in our naivety, were the power of sovereign states, the ruthlessness and capacity of corporations and the passivity of consumers, a combination of which eventually led to corporate capture of the internet and the centralisation of digital power in the hands of a few giant corporations and national governments. ... Will this happen to blockchain technology? Hopefully not, but the enthusiastic endorsement of it by outfits such as Goldman Sachs is not exactly reassuring. The problem with digital technology is that, for engineers, it is both intrinsically fascinating and seductively challenging, which means that they acquire a kind of tunnel vision: they are so focused on finding solutions to the technical problems that they are blinded to the wider context.


Ultra-Long Battery Life Is Coming … Eventually

Experts say battery life is getting better in consumer electronics—through a combination of super-efficient processors, low-power states, and a little help from advanced technologies like silicon anode. It’s just not necessarily getting 10 times better. Conventional lithium-ion batteries have their energy density limits, and they typically improve by single-digit percentages each year. And there are downsides to pushing the limits of energy density. “Batteries are getting a little bit better, but when batteries get better in energy density, there’s usually a trade-off with cycle life,” says Venkat Srinivasan, who researches energy storage and is the director of the Argonne Collaborative Center for Energy Storage Science. “If you go to the big consumer electronics companies, they’ll have a metric they want to achieve, like we need the battery to last for 500 cycles over two or three years. But some of the smaller companies might opt for longer run times, and live with the fact that the product might not last two years.”


7 obstacles that organizations face migrating legacy data to the cloud

Asked why they're looking to move their legacy data off-premises and to the cloud, 46% of the executives cited regulatory compliance as the top reason. Some 38.5% pointed to cost savings as the biggest reason, while 8.5% mentioned business intelligence and analytics. The survey also asked respondents to identify the features and benefits that would most influence them to move their legacy data to the cloud. The major benefit cited by 66% was the integration of data and legacy archives. Some 59% cited the cloud as a way to centrally manage the archiving of all data including data from Office 365. Other reasons mentioned included data security and encryption, advanced records management, artificial intelligence-powered regulatory and compliance checking, and fast and accurate centralized search. Of course, anxiety over cyber threats and cyberattacks also plays a role in the decision to migrate legacy data. Among the respondents, 42% said that concerns over cybersecurity and ransomware attacks slightly or significantly accelerated the migration plans.


View cloud architecture through a new optimization lens

IT and enterprise management in general is getting wise to the fact that a solution that “works” or “seems innovative” does not really tell you why operations cost so much more than forecast. Today we need to audit and evaluate the end state of a cloud solution to provide a clear measure of its success. The planning and development phases of a cloud deployment are great places to plan and build in audit and evaluation procedures that will take place post-development to gauge the project’s overall ROI. This end-to-beginning view will cause some disturbance in the world of those who build and deploy cloud and cloud-related solutions. Most believe their designs and builds are cutting edge and built with the best possible solutions available at the time. They believe their designs are as optimized as possible. In most instances, they’re wrong. Most cloud solutions implemented during the past 10 years are grossly underoptimized. So much so that if companies did an honest audit of what was deployed versus what should have been deployed, a very different picture of a truly optimized cloud solution would take shape.


How Blockchain Startups Think about Databases and dApp Efficiency

When applications are built on top of a blockchain, these applications are inherently decentralized — hence referred to as dApps (decentralized applications). Most dApps today leverage a Layer 1 (L1) blockchain technology like Ethereum as their primary form of storage for transactions. There are two primary ways that dApps interact with the underlying blockchain: reads and writes. Let’s take as an example an NFT gaming dApp that rewards winning gamers with coins they can then use to purchase NFTs: Writes are performed to an L1 chain whenever a gamer wins and coins are added to their wallet; reads are performed when a gamer logs into the game and needs to pull the associated NFT metadata for their game character (think stats, ranking, etc.). For an early-stage dApp building the game described above, writing directly to Ethereum is prohibitive because of slow performance (impacting latency) and high cost. To help developers in the dApp ecosystem, sidechains and Layer 2 (L2) solutions like Polygon improve performance. 
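
The read/write split described above might look roughly like the following sketch. The `chain` client and its methods are hypothetical stand-ins for whatever L1 or L2 SDK a dApp actually uses.

```python
# Hypothetical helpers -- a real dApp would use a web3-style client pointed at an L1 or L2 RPC endpoint.

def write_coins_to_wallet(chain, wallet_address: str, amount: int) -> str:
    """Write path: submit a transaction crediting coins when a gamer wins.
    On an L1 like Ethereum this is slow and costs gas, which is why L2s such as Polygon are attractive."""
    tx = {"to": wallet_address, "action": "credit", "amount": amount}
    return chain.send_transaction(tx)  # hypothetical method; returns a transaction hash once submitted

def read_nft_metadata(chain, token_id: int) -> dict:
    """Read path: pull the NFT metadata (stats, ranking, etc.) for the gamer's character.
    Reads are plain contract calls -- no transaction and no gas."""
    return chain.call("tokenMetadata", token_id)  # hypothetical method
```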


Google calls for new government action to protect open-source software projects

“We need a public-private partnership to identify a list of critical open source projects — with criticality determined based on the influence and importance of a project — to help prioritize and allocate resources for the most essential security assessments and improvements,” Walker wrote. The blog post also called for an increase in public and private investment to keep the open-source ecosystem secure, particularly when the software is used in infrastructure projects. For the most part, funding and review of such projects are conducted by the private sector. The White House had not responded to a request for comment by time of publication. “Open source software code is available to the public, free for anyone to use, modify, or inspect ... That’s why many aspects of critical infrastructure and national security systems incorporate it,” wrote Walker. “But there’s no official resource allocation and few formal requirements or standards for maintaining the security of that critical code. In fact, most of the work to maintain and enhance the security of open source, including fixing known vulnerabilities, is done on an ad hoc, volunteer basis.”


How AI Can Improve Software Development

By leveraging AI to automate the identification of the specific lines of code that require attention, developers can simply ask this AI-driven knowledge repository where behaviors are coming from—and quickly identify the code associated with that behavior. This puts AI squarely in the position of intelligence augmentation, which is key to leveraging its capabilities. This novel approach of AI reinterprets what the computation represents and converts it into concepts, therefore “thinking” about the code in the same way humans do. The result is that software developers no longer have to unearth the intent of previous developers encoded in the software to find potential bugs. Even better, developers are able to overcome the inadequacies of automated testing by using AI to validate that they haven’t broken the system before they compile or check in the code. The AI will forward simulate the change and determine whether it’s isolated to the behavior under change. The result is the bounds of the change are confined to the behavior under change so that no unintended consequences arise.


A busy year ahead in low-code and no-code development

There's logic to developers embracing low-code and no-code methodologies. "Developers love to code, but what they love more is to create, regardless the language," says Steve Peak, founder of Story.ai. "Developers are always seeking new tools to create faster and with more enjoyment. Once low and no code grows into a tool that developers have more control over what they truly need to get done; they unquestionably will use them. It helps them by getting work done quicker with more enjoyment, examples of this are everywhere and are engrained into most developers. A seek for the next, better thing." At the same time, there is still much work to be done -- by professional developers, of course -- before true low-code or no-code capabilities are a reality. "Even the most popular tools in the market require significant API knowledge and most likely JavaScript experience," says Peak. "The products that do not require API or JavaScript experience are limited in functionality and often resemble that of custom Kanban boards and more media rich spreadsheets wherein information logic is mostly entirely absent."


The Future of the Metaverse + AI and Data Looks Bright

The next generation of VR headsets will collect more user information, including detecting the stress level of the user, and even facial recognition. “We’re going to see more capabilities and really understanding the biometrics that are generated from an individual, and be able to use that to enhance the training experience,” he says. That data collection will enable a feedback loop with the VR user. For example, if an enterprise is using VR to simulate a lineman repairing a high-voltage wire, the headset will be able to detect the anxiety level of the user. That information will inform the enterprise how to personalize the next set of VR lessons for the employee, Eckert says. “Remember, you’re running nothing more than software on a digital device, but because it senses three dimensions, you can put input through gesture hand control, through how you gaze, where you gaze. It’s collecting data,” he says. “Now that data can then be acted upon to create that feedback loop. And that’s why I think it’s so important. In this immersive world that we have, that feedback …will make it even that much more realistic of an experience.”


Data Engineering and Analytics: The End of (The End Of) ETL

Data virtualization does not purport to eliminate the requirement to transform data. In fact, most DV implementations permit developers, modelers, etc., to specify and apply different types of transformations to data at runtime. Does DAF? That is, how likely is it that any scheme can eliminate the requirement to transform data? Not very likely at all. Data transformation is never an end unto itself. It is rather a means to the end of using data, of doing stuff with data. ... Because this trope is so common, technology buyers should be savvy enough not to succumb to it. Yet, as the evidence of four decades of technology buying demonstrates, succumb to it they do. This problem is exacerbated in any context in which (as now) the availability of new, as-yet-untested technologies fuels optimism among sellers and buyers alike. Cloud, ML and AI are the dei ex machina of our age, contributing to a built-in tolerance for what amounts to utopian technological messaging. That is, people not only want to believe in utopia -- who wouldn’t wish away the most intractable of sociotechnical problems? -- but are predisposed to do so.



Quote for the day:

"Authority without wisdom is like a heavy axe without an edge, fitter to bruise than polish." -- Anne Bradstreet

Daily Tech Digest - January 15, 2022

Open source and mental health: the biggest challenges facing developers

The very nature of open source projects means their products are readily available and ripe for use. Technological freedom is something to be celebrated. However, it should not come at the expense of an individual’s mental health. Open source is set up for collaboration. But in reality, a collaborative approach does not always materialise. The accessibility of these projects means that many effective pieces of coding start as small ventures by individual developers, only to snowball into substantial projects on which companies rely but rarely contribute back to. Open source is for everyone, but responsibility comes along with that. If we want open source projects to stay around, any company using them should dedicate some substantial time to contributing back, avoiding unreasonable strain on individual developers by doing so. Sadly, 45% of developers report a lack of support with their open source work. Without sufficient support, the workload to maintain such projects can place developers under enormous pressure, reducing confidence in their ability and increasing anxiety.


Chaos Engineering - The Practice Behind Controlling Chaos

I always tell people that Chaos Engineering is a bit of a misnomer because it’s actually as far from chaotic as you can get. When performed correctly, everything is under the control of the operator. That mentality is the reason our core product principles at Gremlin are: safety, simplicity and security. True chaos can be daunting and can cause harm. But controlled chaos fosters confidence in the resilience of systems and allows for operators to sleep a little easier knowing they’ve tested their assumptions. After all, the laws of entropy guarantee the world will consistently keep throwing randomness at you and your systems. You shouldn’t have to help with that. One of the most common questions I receive is: “I want to get started with Chaos Engineering, where do I begin?” There is no one-size-fits-all answer, unfortunately. You could start by validating your observability tooling, ensuring auto-scaling works, testing failover conditions, or one of a myriad of other use cases. The one thing that does apply across all of these use cases is: start slow, but do not be slow to start.
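
A controlled experiment of the "start slow" kind might look like the sketch below: check the steady state, inject one failure, and verify the hypothesis. The health URL and container name are placeholders, and `docker kill` simply stands in for whatever fault-injection tooling you actually use.

```python
import subprocess
import time
import urllib.request

HEALTH_URL = "http://localhost:8080/health"   # placeholder health endpoint
TARGET_CONTAINER = "orders-service-1"         # placeholder instance to kill

def steady_state_ok() -> bool:
    """Hypothesis check: the service answers its health endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

assert steady_state_ok(), "abort: system was not healthy before the experiment"
subprocess.run(["docker", "kill", TARGET_CONTAINER], check=True)  # inject a single, controlled failure
time.sleep(30)                                                    # give failover / auto-scaling time to react
assert steady_state_ok(), "hypothesis falsified: the service did not survive losing one instance"
```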


How to ward off the Great Resignation in financial services IT

The upshot for CIOs in financial services: You must adapt to recruit and keep talent – and build a culture that retains industry-leading talent. After recently interviewing more than 20 former financial services IT leaders who departed for other companies, I learned that it isn’t about a bad boss or poor pay. They all fondly remembered their time at the firms, yet that wasn’t enough to keep them. ... It is a journey that begins with small steps. Find something small to prove out and get teams to start working in this new way. Build a contest for ideas – assign numbers to submissions so executives have no idea who or what level submitted, and put money behind it. Have your teams vote on the training offered. This allows them to become an active participant and feel their opinions matter. It can also improve the perception that the importance of technology is prioritized as you give access to not only learn new technologies but encourage teams to learn. ... The better these leaders work together, the more that impact, feeling of involvement, and innovation across teams can grow. 


DataOps or Data Fabric: Which Should Your Business Adopt First?

Every organization is unique, so every Data Strategy is equally unique. There are benefits to both approaches that organizations can adopt, although starting with a DataOps approach is likely to show the largest benefit in the shortest amount of time. DataOps and data fabric both correlate to maturity. It’s best to implement DataOps first if your enterprise has identified setbacks and roadblocks with data and analytics across the organization. DataOps can help streamline the manual processes or fragile integration points enterprises and data teams experience daily. If your organization’s data delivery process is slow to reach customers, then a more flexible, rapid, and reliable data delivery method may be necessary, signifying an organization may need to add on a data fabric approach. Adding elements of a data fabric is a sign that the organization has reached a high level of maturity in its data projects. However, an organization should start with implementing a data fabric over DataOps if they have many different and unique integration styles, and more sources and needs than traditional Data Management can address.


How to Repurpose an Obsolete On-Premises Data Center

Once a data center has been decommissioned, remaining servers and storage resources can be repurposed for applications further down the chain of business criticality. “Servers that no longer offer critical core functions may still serve other departments within the organization as backups,” Carlini says. Administrators can then migrate less important applications to the older hardware and the IT hardware itself can be located, powered, and cooled in a less redundant and secure way. “The older hardware can continue on as backup/recovery systems, or spare systems that are ready for use should the main cloud-based systems go off-line,” he suggests. Besides reducing the need to purchase new hardware, reassigning last-generation data center equipment within the organization also raises the enterprise's green profile. It shows that the enterprise cares about the environment and doesn’t want to add to the equipment already accumulating in data centers, says Ruben Gamez, CEO of electronic signature tool developer SignWell. “It's also very sustainable.”


Mitigating Insider Security Threats with Zero Trust

Zero Trust aims at minimising the lateral movement of attacks in an organisation, which is the most common cause of threat duplication and the spread of malware and viruses. When organising capture-the-flag events, we often set exercises that involve working with Metasploit, DDoS attacks, and understanding attack vectors and how attacks move. For example, one exercise used a phishing email targeting a user: it contained a fake memo that each employee was instructed to forward to their peers. The email had PowerShell malware embedded and was used to show how often good-looking emails are too good to be genuine. And since attack vectors are so often aimed at the inside of organisations, Zero Trust suggests always verifying all network borders with equal scrutiny. Now, as with every new technology, Zero Trust is not built in a day, so it might sound like a lot of work for many small businesses, as security sometimes comes across as an expensive investment. 


Trends in Blockchain for 2022

Blockchain is ushering in major economic shifts. But the cryptocurrency market is still a ‘wild west’ with little regulation. According to recent reports, it appears the U.S. Securities and Exchange Commission is gearing up to more closely regulate the cryptocurrency industry in 2022. “More investment in blockchain is bringing it into the mainstream, but what’s holding back a lot of adoption is regulatory uncertainty,” said Parlikar. Forbes similarly reports regulatory uncertainty as the biggest challenge facing blockchain entrepreneurs. Blockchain is no longer relegated to the startup domain, either; well-established financial institutions also want to participate in the massive prosperity, said Parlikar. This excitement is causing a development-first, law-later mindset, similar to the legal grey area that followed Uber as it first expanded its rideshare business. “[Blockchain] businesses are trying to hedge risk,” Parlikar explained. “We want to comply and aren’t doing nefarious things intentionally—there’s just a tremendous opportunity to innovate and streamline operations and increase the end-user experience.”


New Vulnerabilities Highlight Risks of Trust in Public Cloud

The more significant of the two vulnerabilities occurred in AWS Glue, a serverless integration service that allows AWS users to manage, clean, and transform data, and makes the datastore available to the user's other services. Using this flaw, attackers could compromise the service and become an administrator — and because the Glue service is trusted, they could use their role to access other users' environments. The exploit allowed Orca's researchers to "escalate privileges within the account to the point where we had unrestricted access to all resources for the service in the region, including full administrative privileges," the company stated in its advisory. Orca's researchers could assume roles in other AWS customers' accounts that have a trusted relationship with the Glue service. Orca maintains that every account that uses the Glue service has at least one role that trusts the Glue service. A second vulnerability in the CloudFormation (CF) service, which allows users to provision resources and cloud assets, allowed the researchers to compromise a CF server and run as an AWS infrastructure service.


Why Saying Digital Transformation Is No Longer Right

Technology is multiplicative; it doesn't know whether it's multiplying a positive or a negative. So, if you have bad customer service at the front counter, and you add technological enablement - voila! You're now able to deliver bad service faster, and to more people than ever before! The term ‘Digital Transformation’ implies a potentially perilous approach of focusing on technology first. In my career as a technology professional, I’ve seen my share of project successes and failures. The key differentiator between success and failure is the clarity of the desired outcome right from the start of the initiative. I had a colleague who used to say: “Projects fail at the start, most people only notice at the end.” Looking back at the successful initiatives which I was a part of, they possessed several common key ingredients: the clarity of a compelling goal, the engagement of people, and a discipline for designing enablement processes. With those ingredients in place, a simple and reliable enabling tool (the technology), developed using clear requirements, acts like an unbelievable accelerant.


Four key lessons for overhauling IT management using enterprise AI

One of the greatest challenges for CIOs and IT leaders these days is managing tech assets that are spread across the globe geographically and across the internet on multi-cloud environments. On one hand, there’s pressure to increase access for those people who need to be on your network via their computers, smartphones and other devices. On the other hand, each internet-connected device is another asset to be monitored and updated, a potential new entry point for bad actors, etc. That’s where the scalability of automation and machine learning is essential. As your organisation grows and becomes more spread out, there’s no need to expand your IT department. A unified IT management system, powered by AI, will keep communication lines open while continually alerting you to threats, triggering appropriate responses to input and making updates across the organisation. It is never distracted or overworked. ... When it comes to these enterprise AI solutions, integration can be more challenging. And in some cases, businesses end up spending as much on customising the solution as they did on the initial investment.



Quote for the day:

"Strong leaders encourage you to do things for your own benefit, not just theirs." -- Tim Tebow

Daily Tech Digest - January 13, 2022

Crafting an Agile Enterprise Architecture

The blueprint for a truly agile architecture requires fundamental shifts in the business dynamic. There are three essentials that stand out as requirements for attaining agility across the entire enterprise. ... The bedrock of successful enterprise-wide agility is collaboration. Innovation will flourish when it is decentralized, and isolated silos give way to cross-functional, agile and self-organizing teams. An isolated IT team leads to delayed projects, overrun budgets, productivity that is hard to measure and disconnects between business and operations. Every department must be involved in supporting and achieving key business goals. Teams containing a mix of business line and IT professionals accelerate development and delivery, greatly reducing time to market. Based upon a shared customer-centric goal and vision, there is shared ownership of outcomes and a deeper level of engagement throughout the enterprise. Daily communication and collaborative feedback nurture creativity and problem-solving, and drive continuous integration and continuous development. 


The Next Evolution of the Database Sharding Architecture

Considering the new challenges databases are facing, is there an efficient and cost-effective way to leverage these types of databases and enhance them through some new practical ideas? Transparent database sharding is one of the best answers to this question. One of the core techniques is to split the data by rows and columns: the smaller tables produced by splitting a large table are known as shards. The original table is divided into either vertical shards or horizontal shards. The labels given to these tables vary, but a common convention is ‘VS1’ for the first vertical shard and ‘HS1’ for the first horizontal shard, then 2, 3, and so on; each shard holds a subset of the data from the original table's schema. So what is the difference between sharding and partitioning? Both involve breaking large data sets into smaller ones. The key difference is that sharding implies the data is spread across multiple computers, whether partitioned horizontally or vertically, whereas partitioning breaks the database into subsets that are held within a single database, sometimes referred to as the database instance.
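
A minimal sketch of how an application (or a transparent sharding layer acting on its behalf) might route rows to horizontal shards by hashing a shard key is shown below; the shard names and the choice of key are assumptions for illustration.

```python
import hashlib

SHARDS = ["HS1", "HS2", "HS3", "HS4"]  # hypothetical horizontal shards, each on its own server

def shard_for(shard_key: str) -> str:
    """Deterministically route a row to a horizontal shard by hashing its shard key."""
    digest = hashlib.sha256(shard_key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("customer-42"))    # a given key always maps to the same shard
print(shard_for("customer-1337"))  # a different key may land on a different shard
```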


How to make your home office a more pleasant place to work

Eliminating your commute may actually have some negative impacts on your body, especially if your commute involved some amount of walking or biking. These days you could conceivably not leave your home for days on end, and being that sedentary really isn’t good for you. Get up and move, and get your heart pumping. You don’t need a fancy home gym. Get a yoga mat and watch some YouTube workouts that require only your body weight. Force yourself to go for walks, even when you don’t wanna. Stretch! ... Suddenly you have unfettered access to your fridge and snack cabinets, and it can be very tempting to just graze all day. So, what do you do? Here’s the strategy that has worked better for me than anything else: Fill your kitchen with healthy foods, and only healthy foods. Yep, really. If I wander into my kitchen, wanting a snack, and there are chips there, I’m going to eat those chips. But if I go there and the only snackable foods are carrots and sugar snaps, then that’s what I’m going to eat. Basically, I have to use my tendency toward slothfulness against my tendency for gluttony, and it really works!


‘Fully Undetected’ SysJoker Backdoor Malware Targets Windows, Linux & macOS

Once it finds a target, SysJoker masquerades as a system update, researchers said, to avoid suspicion. Meanwhile, it generates its C2 by decoding a string retrieved from a text file hosted on Google Drive. “During our analysis the C2 has changed three times, indicating the attacker is active and monitoring infected machines,” researchers noted in the report. “Based on victimology and malware’s behavior, we assess that SysJoker is after specific targets.” SysJoker’s behavior is similar for all three operating systems, according to Intezer, with the exception that the Windows version makes use of a first-stage dropper. After execution, SysJoker sleeps for a random amount of time, between a minute and a half and two minutes. Then, it will create the C:\ProgramData\SystemData\ directory and copy itself there using the file name “igfxCUIService.exe” – in other words, it masquerades as the Intel Graphics Common User Interface Service. After gathering system information (mac address, user name, physical media serial number and IP address), it collects the data into a temporary text file.


Who is going to Support your Next Mobile App Project? Hint: Not React Native or Flutter

React Native and Flutter are quality projects built by very capable and talented teams. The problem is that they are both incredibly complex and the massive surface area of each project has led to a huge volume of bug reports and other issues, and neither project offers dedicated support. For users of these projects, this complexity and large issue volume, combined with a lack of official support options, ultimately leads to a situation where there are very few options for getting help and support when there’s an issue. Google and Facebook are notorious for lacking a strong customer support culture, even for their paid products. Support is just not their most important priority. This tradeoff enables them to build services that reach mind-boggling levels of scale, but is at odds with what traditional teams and enterprises expect when it comes to vendors supporting their products. Culturally, Google and Facebook just don’t do support well and certainly not when it comes to open source or developer-focused products.


Meeting Patching-Related Compliance Requirements with TuxCare

TuxCare identified an urgent need to remove the business disruption element of patching. Our live kernel patching solution, first rolled out under the brand KernelCare, enables companies such as yours to patch even the most critical workloads without disruption. Instead of the usual patch, reboot, and hope-everything-works routine, organizations that use the KernelCare service can rest assured that patching happens automatically and almost as soon as a patch is released. KernelCare addresses both compliance concerns and threat windows by providing live patching for the Linux Kernel within hours of a fix being available, thus reducing the exposure window and meeting or exceeding requirements in compliance standards. Timeframes around patching have consistently been shrinking in the past couple of decades, from many months to just 30 days to combat fast-moving threats – KernelCare narrows the timeframe to what's about as minimal a window as you could get.


How to achieve data interoperability in healthcare: tips from ITRex

Fast Healthcare Interoperability Resources (FHIR) was released in 2014 by HL7 as an alternative to HL7 v2. It relies on RESTful web services and open web technologies for communication, which can enhance interactions among legacy healthcare systems. Additionally, the RESTful API provides a one-to-many interface, accelerating the onboarding of new data partners. FHIR’s interoperability merits are not limited to EHR and similar systems but extend to mobile devices and wearables. ... Digital Imaging and Communications in Medicine (DICOM) is a standard for communicating and managing medical images and related data. The National Electrical Manufacturers Association developed this standard. DICOM can integrate medical imaging devices produced by different manufacturers by providing a standardized image format. It allows healthcare practitioners to access and share DICOM-compliant images even if they are using different devices for image capturing. At ITRex, we had a large project involving the DICOM standard and medical imaging interoperability.
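
As an illustration of FHIR's RESTful style, the sketch below performs a standard read interaction (GET [base]/Patient/[id]); the base URL and patient ID are hypothetical placeholders, not a real server.

```python
import requests

BASE_URL = "https://example-fhir-server/fhir"  # hypothetical FHIR R4 endpoint

def get_patient(patient_id: str) -> dict:
    """FHIR read interaction: GET [base]/Patient/[id], returning the resource as JSON."""
    resp = requests.get(
        f"{BASE_URL}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

patient = get_patient("example")
print(patient.get("name"))  # a Patient resource carries a list of HumanName structures
```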


Enterprise Data: Prepare for More Change in This Hot Area of Tech

Enterprises using IoT can use embedded databases at the edge to copy aggregated sensor data to a back-end database when online. This brings the value of data directly to operations. At the same time, data from all the devices is being managed in the back-end database to develop analytics to advance the business. Artificial intelligence chips are taking center stage in these environments. AI chips refer to a new generation of microprocessors that are specifically designed to process artificial intelligence tasks faster and use less power. They are particularly good at dealing with artificial neural networks and are designed to do the machine learning model training and inference at the edge. We’ll also see the need for higher performance from edge computing hardware since better sensors and larger AI models now enable a host of new applications. There is a growing need to run inference on more data and then make decisions without sending that data to the cloud. Distributed sites can also be linked together with an enterprise computing environment to create a unified computing environment.
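
A minimal sketch of the edge pattern described above, using Python's built-in SQLite as the embedded database; the back-end sync step is a placeholder for whatever central database or API a given deployment actually uses.

```python
import sqlite3

# Embedded database at the edge: store raw sensor readings locally.
edge = sqlite3.connect("edge_sensors.db")
edge.execute("CREATE TABLE IF NOT EXISTS readings (sensor_id TEXT, ts INTEGER, value REAL)")

def hourly_aggregates():
    """Aggregate locally so only summaries, not raw telemetry, travel to the back end."""
    return edge.execute(
        "SELECT sensor_id, ts / 3600 AS hour, AVG(value), MAX(value) "
        "FROM readings GROUP BY sensor_id, hour"
    ).fetchall()

def sync_to_backend(rows) -> None:
    # Placeholder: in practice this would write to the central back-end database when connectivity returns.
    print(f"uploading {len(rows)} aggregate rows")

sync_to_backend(hourly_aggregates())
```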


How businesses overcome data saving and storing difficulties

First, companies that track data over time are able to understand trends and compare data points. With this information at hand, companies can start the analytical process of asking questions based on that knowledge. Then businesses can create new value from this data. Secondly, when companies begin collecting data, they boost their transparency and transferability. This improves processes in any business. For example, a no-code dashboard can make data more objective and minimize subjectivity. With the proper data tool, internal discussions will be centered more on the business objectives and goals. With the right data at hand, it is easier to ask the right questions, such as how to improve sales after a product launch. Essentially, data eliminates the guessing. Both of these reasons why storing data is important apply whether the company is digital or physical. Data is simply a way to understand your business better, no matter how big or small it is: analyze information, learn, improve and repeat the successes.


Securing your business in the hybrid workplace

Leaders have the chance now to reflect on what was learnt in the past two years and build on their company’s new digital foundations to create a secure, hybrid workplace fit for the post-pandemic economy. You should underpin your hybrid work goals with the rapid advances in technology now available to you, and to set up the foundations to accelerate growth – but simplicity will be key. For instance, with Microsoft 365 Business Premium, you can centrally configure, manage, and protect company-issued and employee’s personal devices accessing business information and services across Windows, Mac, Android or iOS. Simple features such as multi-factor authentication (MFA) can prevent 99 per cent of identity attacks by asking for additional evidence beyond the user’s password to grant access. Adding MFA for remote employees requires them to enter a security code received by a text, phone call or authentication app on their phone when they log into Microsoft 365. So, if a hacker gets hold of someone’s password through a phishing attack, they can’t use it to access sensitive company information.
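
The one-time codes an authentication app shows are typically time-based (RFC 6238 TOTP). The sketch below is a generic illustration of that mechanism, not Microsoft's implementation; the shared secret is a placeholder value.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate an RFC 6238 time-based one-time password from a base32 shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # number of 30-second steps since the epoch
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret; server and app derive the same code independently
```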



Quote for the day:

"A belief is not merely an idea the mind possesses, it is an idea that possesses the mind." -- Robert Oxton Bolt

Daily Tech Digest - January 12, 2022

NIST Updates Cybersecurity Engineering Guidelines

NIST’s publication is a resource for computer engineers and other professionals on the programming side of cybersecurity efforts. “This publication addresses the engineering-driven perspective and actions necessary to develop more defensible and survivable systems, inclusive of the machine, physical, and human components that compose those systems and the capabilities and services delivered by those systems,” the document reads. Spanning over 200 pages, the publication takes a holistic approach to systems engineering. NIST researchers give an overview of the objectives and concepts of modern security systems, primarily regarding the protection of a system's digital assets. One of the key updates NIST authors made in the latest version of the publication was a fresh emphasis on security assurances. In software systems engineering, assurance is represented by the evidence that a given system’s security procedures are robust enough to mitigate asset loss and prevent cyber attacks. Ron Ross, an NIST fellow and one of the authors of the document, told Nextgov that system assurances act as justifications that a security system can operate effectively.


9 ways that cybersecurity may change in 2022

On the plus side, digital wallets can ensure the identity of the user in business or financial transactions, reduce fraud and identity theft, and shrink the cost and overhead for organizations that typically create physical methods of authentication. On the minus side, a person can be at risk if their mobile device is lost or stolen, a device without power due to an exhausted battery is of little use when trying to present your digital ID, and any digital verification that requires connectivity will fail if there's no cellular or Wi-Fi available. ... Shadow or zombie APIs pose a security risk, as they're typically hidden, unknown and unprotected by traditional security measures. More than 90% of attacks in 2022 will focus on APIs, according to Durand. And for organizations without the right type of API controls and security practices, these shadow APIs will become the weak link. ... Information technology and operational technology will collide as IT teams assume responsibility for the security of physical devices. This trend will require interoperability between IT and OT, leading to a convergence of technology to determine who can physically get in a building and who can access key applications.


First for software, agile is a boon to manufacturing

Overall, applying agile methodologies should be a priority for every manufacturer. For aerospace and defense companies, whose complex projects have typically followed the long time horizons of waterfall development, agile design and development are needed to propel the industry into the age of urban air mobility and the future of space exploration. ... Over the past decade, agile software development has focused on DevOps—”development and operations”— which creates the interdisciplinary teams and culture for application development. Likewise, design companies and product manufacturers have taken the lessons of agile and reintegrated them into the manufacturing life cycle. As a result, manufacturing now consists of small teams iterating on products, feeding real-world lessons back into the supply chain, and using software tools to speed collaboration. In the aerospace and defense industry, well known for the complexity of its products and systems, agile is delivering benefits.


Observability, AI And Context: Protecting APIs From Today's (And Tomorrow's) Attacks

Today's digital economy is built on a foundation of APIs that enable critical communications, making it possible to deliver a richer set of services faster to users. Unfortunately, today's security solutions focus on an outmoded way of thinking. Most current organizations deploy security solutions and practices that revolve around network security, intrusion detection and mitigating application vulnerabilities. However, for modern API-driven applications that have become the de-facto deployment model for applications that operate in the cloud, these traditional security practices simply do not scale to meet the challenges of today's organizations. Due to the incredible complexity of APIs, as well as the breadth and depth of their deployment across organizations, security and IT teams need to tackle this problem in a structured process that takes into account API application security best practices and procedures that constantly evaluate an organization's APIs, the level of their security posture and their ability to automate remediated security actions when they are attacked.


2022 will be the year we all start talking about online data collection

From uncovering trends to conducting market research, there are countless reasons why businesses collect publicly available web data from their competitors. Though the competitors in question often also engage in data collection themselves, most will regularly block access attempts and make site changes to prevent their public data from being accessed, even though the information targeted is on public display. All this could be about to change. While it may seem counterintuitive – after all, why would you want to give away information to your competitors – some businesses are beginning to realise that it’s in their best interests to allow their public data to be collected by responsible, well-defined, and compliant data practitioners. Firstly, preventing data collection is like a game of whack-a-mole: When you block one tactic, smart practitioners will simply find another. Secondly, accepting some forms of data collection will enable businesses to accurately distinguish between organic user traffic and collector traffic, giving them a clearer insight into what data is being collected and by whom.


Omnichannel E-commerce Growth Increases API Security Risk

API-led connectivity overcomes obstacles that retailers face gathering data from disparate systems to then consolidate the data into monolithic data warehouses. Since each individual system updates separately, information may be out-of-date by the time it hits the database. APIs enable retailers to build an application network that serves as a connectivity layer for data stores and assets in the cloud, on-premises or in hybrid environments. As a result, mobile applications, websites, IoT devices, CRM and ERP systems (order management, point of sale, inventory management and warehouse management) can all work as one coherent system that connects and shares data in real-time. ... The downside to this rapid growth and development in e-commerce has been a concerning rise in API security attacks. Here, threat actors have executed numerous high-profile breaches against public-facing applications. For example, developers use APIs to connect resources like web registration forms to various backend systems. This tasking flexibility, however, also creates an entrance for automated attacks.


Collaborative Governance Will Be The Driver of The API Economy

Most companies with API programs don’t have advanced API management tools, and they can only do a couple of releases a year from inception to production. Collaborative governance, with an automated platform, is the future to plug the gap from a business standpoint and help them get to market quicker and faster. A whole team would understand how APIs mature and prepare responses for the varying requirements. ... Collaborative governance democratizes the API building process as anybody in a team should be able to build, manage, and maintain APIs. Add a low-code, results-driven platform or AI-assisted development tools to the mix, and developers won’t always need to learn about new tools and technologies from scratch or interact with multiple parties. Through centralizing ownership using version-controlled configuration, enterprises can avoid the disruption caused by manual errors or configuration changes and enable reusability. Time to production is also reduced due to continuous integration and delivery (CI/CD). 


How AI helps essential businesses respond to climate change

Underpinning the AI-based forecast platform is a convolutional neural network (CNN) model. This extracts features from radar reflectivity and meteorological satellite images. This is supported by a trained machine-learning model, which is capable of performing highly accurate and close-to-real-time local weather forecasting in minutes. Meanwhile, a generative adversarial network (GAN) works to generate forecast images with exceptional clarity and detail. One of the benefits of this AI-based prediction model is that it outperforms the traditional physics-based model; for example, the Global/Regional Assimilation and PrEdiction System (GRAPES) requires hours to generate forecasting data, which is far behind the pace needed for organisations that need to make near real-time decisions based on anticipated weather events. Some of the data is conveyed via high-resolution imagery with one-kilometre grid spacing, with updates every 10 minutes providing fresh insights, enabling real-time adjustments to plans or arrangements based on unfolding or predicted weather events. 
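
For readers unfamiliar with the building blocks, the sketch below shows a toy convolutional model that maps a short stack of radar frames to one predicted frame. It is a hedged illustration of the general idea, not the architecture the platform actually uses, and it assumes TensorFlow/Keras is installed.

```python
import tensorflow as tf

def build_nowcasting_cnn(frames: int = 4, height: int = 128, width: int = 128) -> tf.keras.Model:
    """Toy CNN: a stack of past radar reflectivity frames in, a single forecast frame out."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(height, width, frames)),
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(1, 1, padding="same"),  # predicted reflectivity per grid cell
    ])

model = build_nowcasting_cnn()
model.compile(optimizer="adam", loss="mse")
# model.fit(past_frame_stacks, next_frames, ...)  # trained on historical radar sequences
```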


Stargate gRPC: The Better Way to CQL

In 2008, Google developed, open-sourced, and released Protocol Buffers — a language-neutral mechanism for serializing structured data. In 2015, Google released gRPC (also open source) to incorporate Protocol Buffers into work to modernize Remote Procedure Call (RPC). gRPC has a couple of important performance characteristics. One is the improved data serialization, making data transit over the network much more efficient. The other is the use of HTTP/2, which enables bidirectional communication. As a result, there are four call types supported in gRPC: Unary calls; Client-side streaming calls; Server-side streaming calls; and Bidirectional calls, which are a composite of client-side and server-side streaming. Put all this together and you have a mechanism that is fast — very fast when compared to other HTTP-based APIs. gRPC message transmission can be 7x to 10x faster than traditional REST APIs. In other words, a solution based on gRPC could offer performance comparable to native drivers.
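
To make the call types concrete, the sketch below shows a unary call and a server-side streaming call from a Python gRPC client; the service, message types, and generated stub names are illustrative assumptions rather than Stargate's actual API.

```python
import grpc
# Hypothetical modules generated by protoc from an illustrative CQL-over-gRPC .proto file.
import cql_pb2
import cql_pb2_grpc

channel = grpc.insecure_channel("localhost:8090")   # placeholder endpoint
stub = cql_pb2_grpc.CqlServiceStub(channel)         # hypothetical generated stub

# Unary call: one request, one response.
result = stub.ExecuteQuery(cql_pb2.Query(cql="SELECT * FROM library.books LIMIT 10"))

# Server-side streaming call: one request, a stream of responses consumed like an iterator.
for page in stub.ExecuteQueryStream(cql_pb2.Query(cql="SELECT * FROM library.books")):
    print(page)
```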


2022 promises to be a challenging year for cybersecurity professionals

One thing the pandemic has demonstrated is an unprecedented shift in endpoints, workloads, and where data and applications reside. Today the Federal workforce remains mostly remote: telework is conducted over modern endpoints such as mobile devices and tablets, and applications and productivity tools are now cloud-hosted solutions. To be effective, those additional endpoints and mobile devices need to be included in the Agency’s asset inventory, the devices need to be managed and validated for conformance with the Agency’s security policies, and the identities of the user and their device must be known and validated. Additionally, cloud-hosted applications must be included in the zero-trust framework, protected by strong conditional access controls, effective vulnerability management and automated patch management processes. I am optimistic that we can make great strides toward improving cybersecurity in 2022 if we are smart and pragmatic about prioritization, risk management, and leveraging automation to help us work smarter, not harder.
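As a rough sketch (not from the article), the checks described above can be thought of as a single conditional-access decision: is the device in the inventory, is it compliant with policy, and is the identity validated? The inventory, policy threshold and field names below are invented for illustration.

```python
# A toy conditional-access decision combining inventory, compliance and identity checks.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    device_id: str
    mfa_passed: bool
    patch_level: int

ASSET_INVENTORY = {"laptop-001", "tablet-007"}   # managed endpoints (illustrative)
MIN_PATCH_LEVEL = 42                             # illustrative policy value

def allow_access(req: AccessRequest) -> bool:
    in_inventory = req.device_id in ASSET_INVENTORY
    compliant    = req.patch_level >= MIN_PATCH_LEVEL
    identity_ok  = req.mfa_passed
    return in_inventory and compliant and identity_ok

print(allow_access(AccessRequest("alice", "tablet-007", True, 57)))   # True
print(allow_access(AccessRequest("bob", "byod-phone", True, 57)))     # False: unmanaged device
```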



Quote for the day:

"Making those around you feel invisible is the opposite of leadership." --  Margaret Heffernan

Daily Tech Digest - January 11, 2022

4 healthcare cloud security recommendations for 2022

Under the Health Insurance Portability and Accountability Act (HIPAA), cloud service providers aren’t considered business associates, which are entities that use or disclose protected health information (PHI). Companies that perform services such as claims administration, quality assurance, benefits management, and billing qualify as business associates. That said, Chung encouraged healthcare organizations to push their CSPs to sign a business associate agreement, or BAA, to ensure that the provider assumes responsibility for safeguarding the organization’s PHI. “If a CSP is not willing to sign a BAA, then you have to ask yourself, ‘Do they treasure your data as much as you do?’” Chung said. “The BAA provides assurance to organizations that we protect their data, that we provide training to our employees, and that we store and process consumer data securely.” Healthcare’s traditional network perimeter no longer exists. Many physicians and nurses may work at multiple locations for the same institution, sometimes visiting several locations in one day, or clinical staff may conduct research at a nearby university.


How To Implement Efficient Test Automation In The Agile World

In traditional environments, we have predefined builds that may be weekly, fortnightly or sometimes even monthly. One reason is that these deployments take time. The problem with this approach is that we have to wait for the predefined dates to get bugs fixed or new features implemented, so there is a delay. The second reason is that by the time testers finish testing and report bugs and defects, the programmers have moved on to different pieces of implementation and have less interest in resolving bugs in the older application. This approach also delays making the feature available in production. Builds and deployments are repetitive and sometimes boring tasks. ... Automating testing behind the GUI is comparatively easier than automating the actual GUI. Another advantage is that, irrespective of UI changes, the functionality remains intact: even if some UI elements change, the functionality of the feature does not. This technique mainly focuses on business logic and rules.
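A minimal sketch of testing behind the GUI: the business rule is exercised directly at the service layer, so the test keeps passing even when the screens change. The discount function and thresholds are invented for illustration.

```python
# Test a business rule directly, bypassing the GUI entirely.
import unittest

def order_discount(total: float, loyalty_member: bool) -> float:
    """Business rule: 10% off orders over 100, plus 5% for loyalty members."""
    discount = 0.10 if total > 100 else 0.0
    if loyalty_member:
        discount += 0.05
    return round(total * discount, 2)

class OrderDiscountTests(unittest.TestCase):
    def test_large_order(self):
        self.assertEqual(order_discount(200, loyalty_member=False), 20.0)

    def test_loyalty_member_small_order(self):
        self.assertEqual(order_discount(50, loyalty_member=True), 2.5)

if __name__ == "__main__":
    unittest.main()
```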


16 irresistible cloud innovations

The major public clouds and several database vendors have implemented planet-scale distributed databases with underpinnings such as data fabrics, redundant interconnects, and distributed consensus algorithms that enable them to work efficiently and with up to five-nines reliability (99.999% uptime). Cloud-specific examples include Google Cloud Spanner (relational), Azure Cosmos DB (multi-model), Amazon DynamoDB (key-value and document), and Amazon Aurora (relational). Vendor examples include CockroachDB (relational), PlanetScale (relational), Fauna (relational/serverless), Neo4j (graph), MongoDB Atlas (document), DataStax Astra (wide-column), and Couchbase Cloud (document). ... Companies with large investments in data centers often want to extend their existing applications and services into the cloud rather than replace them with cloud services. All the major cloud vendors now offer ways to accomplish that, both through specific hybrid services (for example, databases that can span data centers and clouds) and through on-premises servers and edge cloud resources that connect to the public cloud, an arrangement often called hybrid cloud.


Analytics transformation in wealth management

Early success stories are encouraging, but they are the exception rather than the rule. More often, firms have started the transformation journey but have faltered along the way. Common reasons include a lack of ownership at senior levels and budgetary or strategic constraints that prevent project teams from executing effectively. The challenges of transforming service models are significant but not insurmountable. Indeed, as analytics use cases become more pervasive, implementation at scale becomes more achievable. In the following paragraphs, we present five ingredients of an analytics-based transformation (Exhibit 3). These can be supported by strong leadership, a rigorous focus on outcomes, and a willingness to embrace new ways of working. Managers who execute effectively will get ahead of the competition and be much more adept at meeting client needs. ... Analytics-driven transformations are often restricted to narrow silos occupied by a few committed experts. As a result, applications fail to pick up enough momentum to make a real difference to performance.


Is Data Science a Dying Career?

Firstly, data science has never been about re-inventing the wheel or building highly complex algorithms. The role of a data scientist is to add value to an organization with data. And in most companies, only a very small portion of this involves building ML algorithms. Secondly, there will always be problems that cannot be solved by automated tools. These tools have a fixed set of algorithms you can pick from, and if you do find a problem that requires a combination of approaches to solve, you will need to do it manually. And although this doesn’t happen often, it still does — and as an organization, you need to hire people skilled enough to do this. Also, tools like DataRobot can’t do data pre-processing or any of the heavy lifting that comes before model building. As someone who has created data-driven solutions for startups and large companies alike, the situation is very different from what it’s like dealing with Kaggle datasets. There is no fixed problem. Usually, you have a dataset, and you are given a business problem. 
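A minimal sketch of the kind of pre-processing that typically precedes model building and that automated tools leave to the data scientist; the file name and column names are invented for illustration.

```python
# Typical data cleaning and feature preparation before any model is built.
import pandas as pd

df = pd.read_csv("customers.csv")

# Handle missing values and obvious data-quality issues.
df = df.drop_duplicates()
df["age"] = df["age"].fillna(df["age"].median())
df = df[df["age"].between(18, 100)]

# Encode a categorical column and derive a feature from a raw field.
df = pd.get_dummies(df, columns=["plan_type"])
df["tenure_years"] = df["tenure_months"] / 12

print(df.head())
```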


6 cloud security trends to watch for in 2022

More organizations are starting to fully adopt Infrastructure-as-Code (IaC) to create fully autonomous cloud-based environments. From a security perspective, ensuring that the supply chain from code to production is protected and monitored is becoming an increasing concern for organizations. Tools in this space are starting to mature, and new strategies are being implemented. For example, you can pre-validate configurations and architecture, ensuring your architecture and code are compliant and secure before anything moves to production. ... Multi-cloud strategies are here to stay – many enterprises are picking the technologies best suited for their platforms while also creating resilient architectures that utilize more than one cloud service provider. We will soon see this adoption model mature along with multi-cloud security practices and tools. Additionally, we see “multi-cloud” enveloping edge computing, which will continue to extend onto factory floors, as well as into branch offices and private data centers.
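A minimal sketch of pre-validating configuration before it reaches production, assuming an exported plan in a simple JSON shape; the resource structure and policy rules are assumptions for illustration and are not tied to any particular IaC tool.

```python
# Check an exported infrastructure plan against simple security policies and
# fail the pipeline if anything violates them.
import json
import sys

def check_resources(resources: list[dict]) -> list[str]:
    findings = []
    for r in resources:
        if r.get("type") == "object_storage" and not r.get("encrypted", False):
            findings.append(f"{r['name']}: storage must be encrypted at rest")
        if r.get("public_access", False):
            findings.append(f"{r['name']}: public access is not permitted")
    return findings

if __name__ == "__main__":
    plan = json.load(open(sys.argv[1]))        # e.g. an exported plan file
    findings = check_resources(plan.get("resources", []))
    for f in findings:
        print("POLICY VIOLATION:", f)
    sys.exit(1 if findings else 0)             # non-zero exit blocks the deployment
```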


How Low-Code Enables the Composable Enterprise

A composable enterprise aims to create an application architecture wherein enterprises can deliver various functions through composition (as opposed to development), by leveraging packaged business capabilities (PBCs). Gartner estimates that by 2023, 30% of new applications will be delivered, priced, and consumed as libraries of packaged business capabilities, up from fewer than 5% in 2020. To be fair, this run-up to a composable enterprise is not a fresh-off-the-press revelation. Enterprises have been attempting to move from hardcore coding-based development to a more service-and-composition-oriented architecture over the last couple of decades, albeit only in pockets and not as fast as they would have wished. The composable enterprise has become the need of the hour, driven by the sense of urgency created by multi-faceted disruption across industries, coupled with technological advancements that make it possible for organizations to accomplish it at an enterprise scale.
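As a rough sketch of composition over development, each packaged business capability below exposes a small, uniform interface and an application is assembled by wiring capabilities together. The capability names and interface are invented for illustration and are not part of the article or any product.

```python
# Compose an application from stubbed "packaged business capabilities".
from typing import Protocol

class Capability(Protocol):
    def run(self, payload: dict) -> dict: ...

class CheckInventory:
    def run(self, payload: dict) -> dict:
        payload["in_stock"] = True          # stubbed inventory lookup
        return payload

class TakePayment:
    def run(self, payload: dict) -> dict:
        payload["paid"] = True              # stubbed payment call
        return payload

def compose(*capabilities: Capability):
    def app(payload: dict) -> dict:
        for cap in capabilities:
            payload = cap.run(payload)
        return payload
    return app

checkout = compose(CheckInventory(), TakePayment())
print(checkout({"order_id": "A-100"}))
```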


10 Things Will Define the Digital Transformation in 2022

The reality of the digital shift is that consumers are no longer constrained by how far away something is: the item they want to buy, the service provider they want to engage, the employer they want to work for, the trainer they want to buff them up, or the concert or movie they want to watch. Or just about anything else they want to do. Technology is making once-physical interactions immersive digital experiences – sometimes complementing the physical world, and sometimes replacing the activities once done there. For businesses, this is both a threat and an opportunity – an undeniable dynamic driving the evolution of the connected economy. In retail. In grocery. In entertainment. In work. In banking. In just about everything — including many healthcare services. Proximity is no longer a barrier, and those who wish to make it a competitive advantage now have to one-up the digital alternatives that consumers find easier, more convenient and less wasteful of their precious time.


What is the role of the CTO?

The CTO role also entails effective management of risks, which keep changing as the organisation innovates. Identifying possible risks and planning how to mitigate them as early as possible is particularly important for any digital transformation initiative, such as cloud migration. ...  “However, it is inevitable that as part of that migration there will be some component misconfigured, a vulnerability uncovered in a new technology, or a human error that introduces an unintended path to access a system. The CTO should understand the possible impacts a breach in a specific application could have to the business as a starting point, and then assess a difficult question – how likely is that risk to be realised? “As CTO, you must consider all the surrounding process and infrastructure needed to mitigate the security risks of an initiative. Are the assumptions you are making about the capabilities of third party vendors, and your own security organisation, accurate today and in the future? Perhaps the ROI won’t be quite as high if this is fleshed out in detail upfront, but that will be a far better result than being caught flat-footed after a production roll-out.”


How China's Algorithm Regulation Affects Businesses

Algorithm-powered recommendation services offer relevant suggestions for users based on their history of choices and are popularly used by video streaming services, e-commerce companies and dating apps. The CAC's regulation, however, is not confined to just search results or personalized recommendation algorithms that push e-commerce products. It also applies to dispatching and decision-making algorithms that are used by transport and delivery services and to generative or synthetic-type algorithms used in gaming and virtual environments, says Ed Sander, China tech and policy expert and co-founder of the technology and business website ChinaTalk, in a blog post. Companies that use algorithm-based recommendations are required to disclose service rules for algorithmic recommendations and periodically review, assess and verify algorithm mechanisms, according to the regulation. The new regulation also says that companies must ensure that their algorithmic models do not induce users to "become addicted, spend large amounts or indulge in activities that go against good public customs."



Quote for the day:

"A leader should demonstrate his thoughts and opinions through his actions, not through his words." -- Jack Weatherford