Daily Tech Digest - August 21, 2022

Using AI to Automate, Orchestrate, and Accelerate Fraud Prevention

Traditional approaches to fraud prevention and response no longer measure up. First, they’re reactive rather than proactive, focused on damage that has already taken place rather than anticipating, and potentially preventing, the threats of the future. The limitations of this approach play out in commercial off-the-shelf tools that organizations can’t easily adapt to new developments in the landscape. Even the most cutting-edge AI solutions may be limited in detecting new types of fraud schemes, having only been trained on known categories. Second, today’s siloed operations impede progress. Cybersecurity teams and fraud teams, the two groups on the frontlines of the fight, too often work with different tools, workflows, and intelligence sources. These silos extend across the various stages of the fraud-fighting lifecycle: threat hunting, monitoring, analysis, investigation, response, and more. Individual tools address only discrete parts of the process, rather than the full continuum, leaving much to fall through the gaps. When one team notices something suspicious, the full organization might not know about the threat and act on it until it’s too late.


Fundamentals of AI Ethics

One of the biggest challenges in AI, bias can stem from several sources. The data used for training AI models might reflect real societal inequalities, or the AI developers themselves might hold conscious or unconscious biases about gender, race, age, and more that can wind up in ML algorithms. Discriminatory decisions can ensue, such as when Amazon’s recruiting software penalized applications that included the word “women,” or when a health care risk prediction algorithm exhibited a racial bias that affected 200 million hospital patients. To combat AI bias, AI-powered enterprises are incorporating bias-detecting features into AI programming, investing in bias research, and making efforts to ensure that the training data used for AI and the teams that develop it are diverse. Gartner predicts that by 2023, “all personnel hired for AI development and training work will have to demonstrate expertise in responsible AI.” Continually monitoring, analyzing, and improving ML algorithms using a human-in-the-loop (HITL) approach – where humans and machines work together, rather than separately – can also help reduce AI bias.
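
One simple bias check that teams build into monitoring pipelines is comparing positive-outcome rates across groups; the metric below (a demographic-parity gap) and the toy data are illustrative, not taken from the article:

```python
def demographic_parity_gap(decisions):
    """Difference in positive-outcome rate between groups.

    `decisions` maps a group label to a list of 0/1 model outcomes;
    a large gap is a signal to investigate the training data.
    """
    rates = {g: sum(out) / len(out) for g, out in decisions.items()}
    return max(rates.values()) - min(rates.values())
```

A HITL workflow would surface a large gap to a human reviewer rather than act on it automatically.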


10 nonfunctional requirements to consider in your enterprise architecture

Scalability refers to the system's ability to perform and operate as the number of users or requests increases. It is achievable with horizontal or vertical scaling of the machine or by attaching AutoScalingGroup capabilities. Here are three areas to consider when architecting scalability into your system:

Traffic pattern: Understand the system's traffic pattern. It's not cost-efficient to spawn as many machines as possible, due to underutilization. Here are three sample patterns:
- Diurnal: Traffic increases in the morning and decreases in the evening for a particular region.
- Global/regional: Heavy usage of the application in a particular region.
- Thundering herd: Many users request resources, but only a few machines are available to serve the burst of traffic. This could occur during peak times or in densely populated areas.

Elasticity: This relates to the ability to quickly spawn a few machines to handle the burst of traffic and gracefully shrink when the demand is reduced.

Latency: This is the system's ability to serve a request as quickly as possible.
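
The elasticity idea boils down to sizing the fleet to the current load, with a floor for bursts and a ceiling for cost. A minimal sketch, where the per-instance capacity and the bounds are invented values, not figures from the article:

```python
import math

def desired_instances(requests_per_second: float,
                      capacity_per_instance: float = 500.0,
                      min_instances: int = 2,
                      max_instances: int = 50) -> int:
    """Return how many instances are needed for the current load.

    A floor (min_instances) absorbs thundering-herd bursts while new
    machines spin up; a ceiling (max_instances) caps cost when a spike
    would otherwise spawn far more machines than is cost-efficient.
    """
    needed = math.ceil(requests_per_second / capacity_per_instance)
    return max(min_instances, min(needed, max_instances))
```

Running this against a diurnal traffic curve shows the fleet growing in the morning and shrinking gracefully in the evening.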

When we might meet the first intelligent machines

A few weeks later, Yann LeCun, the chief scientist at Meta’s artificial intelligence (AI) Lab and winner of the 2018 Turing Award, released a paper titled “A Path Towards Autonomous Machine Intelligence.” He shares in the paper an architecture that goes beyond consciousness and sentience to propose a pathway to programming an AI with the ability to reason and plan like humans. Researchers call this artificial general intelligence or AGI. I think we will come to regard LeCun’s paper with the same reverence that we reserve today for Alan Turing’s 1936 paper that described the architecture for the modern digital computer. Here’s why. ... LeCun’s first breakthrough is in imagining a way past the limitations of today’s specialized AIs with his concept of a “world model.” This is made possible in part by the invention of a hierarchical architecture for predictive models that learn to represent the world at multiple levels of abstraction and over multiple time scales. With this world model, we can predict possible future states by simulating action sequences. In the paper, he notes, “This may enable reasoning by analogy, by applying the model configured for one situation to another situation.”


Why DevOps Governance is Crucial to Enable Developer Velocity

One key takeaway from all this: consolidation of application descriptors enables efficiencies via modularization and reuse of tested and proven elements. This way the DevOps team can respond quickly to the dev team's needs in a way that is scalable and repeatable. Some potential anti-patterns include:

- Developers throwing their application environment change needs over the fence to the DevOps team via the ticketing system, causing the relationship to worsen. Leaders should implement safeguards to detect this scenario in advance and then consider the appropriate response. An infrastructure control plane, in many cases, can provide the capabilities to discover and subsume the underlying IaC files and detect any code drift between the environments. Automating this process can alleviate much of the friction between developers and DevOps teams.

- Developers taking things into their own hands, resulting in an increased number of changes in local IaC files and an associated loss of control. Mistakes happen, things stop working, and finger pointing ensues.


The Role of ML and AI in DevOps Transformation

DevOps is changing fundamentally as a result of AI and ML. The change is most notable in security, which acknowledges the need for complete protection that is intelligent by design (DevSecOps). Many of us believe that shortening the software development life cycle is the next critical step in ensuring the secure delivery of integrated systems via Continuous Integration & Continuous Delivery (CI/CD). DevOps is a business-driven method for delivering software, and AI is a technology that may be integrated into the system for improved functioning; they are mutually dependent. With AI, DevOps teams can test, code, release, and monitor software more effectively. Additionally, AI can enhance automation, swiftly locate and fix problems, and improve teamwork. AI has the potential to increase DevOps productivity significantly. It can improve performance by facilitating rapid development and operation cycles and providing an engaging user experience. Machine learning technologies can simplify data collection from multiple DevOps system components.


Data Lakes Are Dead: Evolving Your Company’s Data Architecture

Changing your data architecture starts with recognizing that the process spans beyond IT – it’s a company-wide shift. Data literacy and culture are fundamental components of launching or changing data architecture. This shift begins with defining your business goals and value chain. What business problem do you want to solve, and how can your data be optimized to accomplish that goal? Different data architecture offers diverse possibilities for conducting analytics, none of which are inherently better than another. Having a company-wide understanding of where you are and where you’re going helps guide what you should be getting out of your data and what architecture would best serve those needs at each level of your organization. Once you’ve identified how to manage your data better to serve your organization, you need to establish overarching data governance. Again, data governance is not a set of procedures for IT, but a company-wide culture. An impactful data culture involves a carefully curated ecosystem of roles, responsibilities, tools, systems, and procedures. 


7 benefits of using design review in your agile architecture practices

The things involved in a design review include:

- The designer: the person who wants to solve a problem.
- The documentation: the document at the center of attention. It contains information regarding all aspects of the problem and the proposed solution.
- The reviewer: the person who will review the documentation.
- The process: the agreed-upon rules and interactions that define the designer's and reviewer's communications. It may stand alone or be part of a bigger process. For example, in a software development life cycle, it could precede development, or in an API specification, it could include evaluating changes.
- The review scope: the area the reviewer tries to cover when reviewing the documentation (technical or not).

... Design review has clear value that far outweighs the overhead it introduces, much like code review does in software releases. Organizations should consider it part of their governance model in conjunction with other tools and practices, including architecture review boards.


Enterprise Architecture Governance – Why It Is Important

The Enterprise Architecture organization helps to develop and enable the adoption of design, review, execution and governance capabilities around EA. EA guidance and governance over the enterprise IT solutions delivery processes is focused on realizing a number of solution characteristics. These include:

- Standardization: Development and promotion of enterprise-wide IT standards.
- Consistency: Enabling the required levels of information, process and application integration and interoperability.
- Reuse: Strategies and capabilities that enable reuse and leverage of IT assets at the design, implementation and portfolio levels. This could include both process/governance and asset repository considerations.
- Quality: Delivering solutions that meet business functional and technical requirements, with a lifecycle management process that ensures solution quality.
- Cost-effectiveness and efficiency: Enabling consistent leverage of standards, reuse and quality through repeatable decision governance processes, reducing total solution lifecycle cost and enabling better returns on IT investments.


How Blockchain Checks Financial Frauds within Companies

Blockchains are made to be resistant to data modification by design. A blockchain can effectively function as an open, distributed ledger that can efficiently and permanently record transactions between two parties. Blockchain can also be used to verify transactions that have been reported. Using the technology, auditors could simply confirm the transactions on readily accessible blockchain ledgers rather than requesting bank statements from clients or contacting third parties for confirmation. Blockchain technology achieves this immutability by combining cryptography with the chain structure: each transaction that the network deems valid is time-stamped, embedded into a ‘block’ of data, and cryptographically secured by a hashing operation that links to and incorporates the hash of the previous block. This new transaction then joins the chain as the next chronological update. Metadata from the hash output of the previous block is always incorporated into the hashing process of a new block.
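
The chaining mechanics described above can be sketched in a few lines: each block's hash covers its own contents plus the previous block's hash, so tampering anywhere breaks a link. The block fields here are simplified for illustration:

```python
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    """Build a block whose hash covers its own contents *and* the
    previous block's hash, chaining the two together."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def chain_is_valid(chain):
    """Recompute every hash; any tampering breaks the links."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if block["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True
```

An auditor running `chain_is_valid` over a ledger gets exactly the confirmation described above: any altered transaction makes its block's stored hash stop matching.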



Quote for the day:

"Leaders make decisions that create the future they desire." -- Mike Murdock

Daily Tech Digest - August 20, 2022

AI & Synthetic Data's Analysis Of Human Movement

One of the special applications of AI is pose estimation, a computer vision approach that aids in determining the position and orientation of the human body from an image of a person. It can be utilized, for instance, in markerless motion capture, worker position analysis, and avatar animation for virtual reality. Numerous pictures of the human actor and the surrounding environment are required to properly analyze posture. The joints of the human actor are then identified in these photos using a trained convolutional neural network. AI-based fitness apps typically take advantage of the camera on the device to record video at up to 720p and 60 fps to capture more frames while an exercise is being performed. The issue is that when utilizing a method like pose estimation, computer vision experts require enormous volumes of visual data to train AI for fitness assessments. Data involving humans engaging in many types of exercise and interacting with several items is quite complicated. To prevent bias, the data must also have high variance and be sufficiently broad.
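
Once a network has located the joints, posture analysis largely reduces to geometry on the keypoints. A minimal sketch of one such measurement; the keypoint coordinates are invented for illustration:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees), formed by keypoints a-b-c.

    Each keypoint is an (x, y) pixel coordinate of the kind a
    pose-estimation network produces, e.g. shoulder-elbow-wrist.
    """
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))
```

A fitness app can compare such angles frame-by-frame against a reference range for the exercise being performed.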


Why Vulnerability May Be a Leader's Greatest Strength

As leaders, we owe it to our teams to admit when we make a mistake, but it takes vulnerability to admit that we can be wrong. For example, imagine someone recommended a change that I turned down but later recognized as the right move. There is value in explaining what made me go in that direction, but ultimately, I need to take responsibility for being wrong. People respect it when others, especially those in leadership, demonstrate the vulnerability it takes to acknowledge they, too, are only human. Leadership vulnerability drives the courage to innovate and trust among team members, with benefits that ripple into their engagement, satisfaction and retention. Mistakes happen, but a leader who pretends to be perfect and expects perfection ends up with a team too frightened to come clean about their mistakes. They either avoid admitting their mistakes or avoid the risk of making them altogether, holding back creativity, innovation and new ideas.


Google Patches Chrome’s Fifth Zero-Day of the Year

“Publicizing details on an actively exploited zero-day vulnerability just as a patch becomes available could have dire consequences, because it takes time to roll out security updates to vulnerable systems and attackers are champing at the bit to exploit these types of flaws,” observed Satnam Narang, senior staff research engineer at cybersecurity firm Tenable, in an email to Threatpost. Holding back info is also sound given that other Linux distributions and browsers, such as Microsoft Edge, also include code based on Google’s Chromium Project. These all could be affected if an exploit for a vulnerability is released, he said. “It is extremely valuable for defenders to have that buffer,” Narang added. While the majority of the fixes in the update are for vulnerabilities rated as high or medium risk, Google did patch a critical bug tracked as CVE-2022-2852, a use-after-free issue in FedCM reported by Sergei Glazunov of Google Project Zero on Aug. 8. FedCM—short for the Federated Credential Management API–provides a use-case-specific abstraction for federated identity flows on the web, according to Google.


CyberArk Channel Chief: Huge Amount Of Momentum Around SaaS

“We have a huge amount of momentum with our partners around SaaS,” Moore said in an interview with CRN, a week after CyberArk announced impressive second-quarter general revenues and subscription revenues tied to its new products and SaaS strategies. CyberArk, with headquarters in Newton, Mass. and Petach Tikva, Israel, is now about halfway through its 36-month-long global channel transformation that includes a new emphasis on SaaS and subscriptions, said Moore, who joined CyberArk two years ago as its senior vice president of global channels. “Our channel partners love SaaS and love subscriptions, for all the reasons we love SaaS and subscriptions,” he said. ... In particular, he said he likes the fact that CyberArk is now providing earlier access to its new technologies and resources, giving his firm more time to convince customers about the pluses of CyberArk’s offerings. “It’s been nothing but positive,” he said of Optiv’s partnership with CyberArk.


Data Science Vs. Machine Learning: What’s The Difference?

Machine learning is a subset of data science that applies algorithms to make predictions about future events from data. Data scientists use machine learning to find patterns in data, make predictions, and improve the accuracy of future predictions. Data science is a broader field that includes techniques like predictive modeling, feature engineering, and data analysis. It involves understanding how data can be used to improve business outcomes. Data scientists use machine learning to analyze and understand data sets, making predictions about the relationships between variables. Some key differences between the two fields include:

- Machine learning is a probabilistic approach that uses algorithms to learn from data, while data science is focused on understanding and extracting knowledge from data.
- Machine learning is focused on making automated decisions using data.
- Machine learning is often used to solve problems where there is a lot of historical data, while data science is used more for situations where there is not as much historical data.
- Data scientists often have a deep understanding of the problem they are trying to solve and use that understanding to develop machine learning models.
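
"Learning from data" in its simplest form is just parameter estimation: given historical examples, fit parameters that minimize error and then predict. A pure-Python ordinary-least-squares sketch (the data points are invented):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b.

    The 'learning' step is estimating the parameters (a, b) that
    minimize squared error over historical data; prediction is then
    just evaluating a*x + b on new inputs.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b
```

The data-science work, by contrast, sits around this step: deciding which variables to fit, what the prediction is worth to the business, and whether the historical data is trustworthy.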


How SaaS transforms software development

SaaS applications end the fear of delivering an unknown, showstopper bug to customers, without any way to fix it for weeks or months. The days of delivering a patch to an installed product have gone by the wayside. Instead, if a catastrophic bug does wend its way through the development pipeline and into production, you can know about it as soon as it strikes. You can take immediate action—roll back to a known good state or flip off a feature flag—practically before any of your customers even notice. Often, you can fix the bug and deploy the fix in a matter of minutes instead of months. And it’s not just bugs. You no longer have to hold new features as “inventory,” waiting for the next major release. It used to be that if you built a new feature in the first few weeks after a major release, that feature would have to wait potentially months before being made available to customers. Now, a SaaS application can deliver a new feature immediately to customers whenever the team says it is ready.
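
The flip-off-a-feature-flag response described above is typically implemented with a small flag layer in front of new code paths. A minimal in-memory sketch; production systems back this with a config service so a flag can be flipped without a deploy:

```python
class FeatureFlags:
    """Tiny in-memory feature-flag store."""

    def __init__(self):
        self._flags = {}

    def enable(self, name):
        self._flags[name] = True

    def disable(self, name):
        # The rollback path: kill a misbehaving feature instantly,
        # without redeploying the application.
        self._flags[name] = False

    def is_enabled(self, name):
        # Unknown flags default to off, so new features ship dark
        # until the team says they are ready.
        return self._flags.get(name, False)
```

Application code then guards the new feature with `if flags.is_enabled("new_checkout"): ...`, which is what makes the minutes-not-months fix possible.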


Welcome To 2032: A Merged Physical/Digital World

We are starting to evolve beyond classical computing into a new data era called quantum computing. It is envisioned that quantum computing will accelerate us into the future by impacting the landscape of artificial intelligence and data analytics. The quantum computing power and speed will help us solve some of the biggest and most complex challenges we face as humans. ... Science is already making great advances in brain/computer interface. This may include neuromorphic chips and brain mapping. Brain-computer interfaces are formed via emerging assistive devices that have implantable sensors that record electrical signals in the brain and use those signals to drive external devices. Eventually these nano-chips may be implanted into our brains, artificially augmenting human thought and reasoning capabilities, and we may be able to upload intelligent data and cognitive resources to our brains by 2032. ... The areas of health and medicine will witness a profound growth of technological innovation by 2032. Numerous breakthroughs in genomics anti-aging therapies will extend our longevity and quality of life.


Patch Now: 2 Apple Zero-Days Exploited in Wild

Security researchers are urging users of Apple Mac, iPhone, and iPad devices to immediately update to newly released versions of the operating systems for each technology, to mitigate risk from two critical vulnerabilities in them that attackers are actively exploiting. The zero-day flaws allow threat actors to take complete control of affected devices. They impact users of iPhone 6s and later, all models of iPad Pro, iPod touch (7th generation), iPad Ai2 and later, iPad 5th generation and later, and iPad mini 4 and later. Also affected are users with Macs running macOS Monterey, macOS Big Sur, and macOS Catalina. Apple disclosed the vulnerabilities and the updates addressing them on Wednesday. One of the zero-days (CVE-2022-32893) exists in WebKit, Apple's browser engine for Safari and for all iOS and iPadOS Web browsers. Apple described the flaw as tied to an out-of-bounds write issue that attackers could use to remotely take control of vulnerable devices.


A critical review of the EU’s ‘Ethics Guidelines for Trustworthy AI’

The majority of AI models in production today are “black box” systems that, by the very nature of their architecture, produce outputs using far too many steps of abstraction, deduction, or conflation for a human to parse. In other words, a given AI system might use billions of different parameters to produce an output. In order to understand why it produced that particular outcome instead of a different one, we’d have to review each of those parameters step-by-step so that we could come to the exact same conclusion as the machine. A solution: the EU should adopt a strict policy preventing the deployment of opaque or black box artificial intelligence systems that produce outputs that could affect human outcomes unless a designated human authority can be held fully accountable for unintended negative outcomes. ... There’s currently no political consensus as to who’s responsible when AI goes wrong. If the EU’s airport facial recognition systems, for example, mistakenly identify a passenger and the resulting inquiry causes them financial harm or unnecessary mental anguish, there’s nobody who can be held responsible for the mistake.
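
One crude way to probe why a black box produced a particular output is to perturb inputs one at a time and watch how much the output moves. The sketch below uses a plain function as a stand-in "model"; real explainability tooling is far more sophisticated than this:

```python
def feature_attribution(model, x, baseline=0.0):
    """Score each input feature by how much the output changes when
    that feature is replaced with a baseline value.

    A crude probe: it treats the model as a callable black box and
    never inspects its internal parameters.
    """
    base_out = model(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline
        scores.append(abs(base_out - model(perturbed)))
    return scores
```

Even this toy probe illustrates the policy problem: it tells you *which* inputs mattered, not *why* the billions of internal parameters combined them the way they did.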


Chipping Away at the Monolith: Applying MVPs and MVAs to Legacy Applications

Organizations are sometimes tempted to do extra technical work, to modernize, or reduce their technical debt because, as they may rationalize, "we’re going to be working on that part of the application anyway, so we should clean things up while we are there." While well-intentioned, this is almost always a bad decision that results in unnecessary cost and delay because once started, it’s very hard to decide to stop. This is where the concept of the MVA pays dividends: it gives everyone a way to decide what changes must be made, and which changes should not be made, at least not yet. If a change is necessary to deliver the desired customer outcome for a release, then it’s part of the MVA, otherwise, it’s out. Sometimes, a team may look at the changes needed to an application and decide, considering the state of the code, that a complete rewrite is in order. The MVA concept, applied to legacy applications, helps to temper that by questioning whether the changes are really necessary to produce the incremental improvements in customer outcomes that are desired.



Quote for the day:

"The art of communication is the language of leadership." -- James Humes

Daily Tech Digest - August 19, 2022

As businesses embrace fully-remote work, does company culture suffer?

Companies that still want to move to a fully remote workplace should consider taking specific actions before doing so, according to Frana. Organizations should:Find out how your staff feels about remote work. Send out a survey to see which employees would want to work from home. Based on those results, you can determine the level of flexibility your company might want to offer. Make sure management is on board. One of the top factors in a remote work policy’s success is how managers feel about it. Explain the benefits of remote work, such as significant savings, the ability to attract and retain top talent from anywhere in the world, and increased productivity. Be intentional about company culture. One of the biggest challenges faced by remote teams is maintaining a strong company culture. In addition to thoughtfully evaluating your current workforce and deciphering what an effective remote-friendly business model looks like, it’s imperative company leaders and managers act with intention and prioritize culture.
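
The survey step above amounts to tallying preferences and sizing the flexibility to offer. A trivial sketch with invented response labels:

```python
from collections import Counter

def summarize_survey(responses):
    """Tally remote-work preferences and report each option's share.

    `responses` is a list of free-form labels such as "remote",
    "hybrid", or "office" (labels invented for illustration).
    """
    counts = Counter(responses)
    total = len(responses)
    return {option: count / total for option, count in counts.items()}
```

Leadership can then match the policy to the dominant preference rather than guessing.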


Creating A Culture Of Cybersecurity

Businesses need to help their employees learn how to do things differently and train them to think of security as a business priority. Researchers have found that our working memory capacity is between three and five ‘chunks’ of information. This number starts to decline in our 30s, so a safe working figure is probably four chunks of information that the majority of your employees are able to keep in their short-term memory at any point. What does this mean for security? Basically, we need to keep things simple and easy to remember. Factsheets and training days may have their place, but on their own they’re not enough. Consider instead a strategy that uses a combination of continual awareness testing and roleplaying worst-case scenarios, to make security something that’s embedded as a mindset. ... CoEs (centers of excellence) act as sparring partners, allowing businesses to test solutions and assumptions around products, services and solutions. CoPs (communities of practice) take this work to a larger audience, allowing employees to form communities to keep them up to date on the latest threats and remind them about their responsibility in keeping the network safe.


How Not to Waste Money on Cybersecurity

A common way enterprises waste money on IT security is by configuring their security plans and budgets based on the latest cybersecurity trends and following what other organizations are doing. “Each organization's security needs will differ based on their line of business, culture, people, policies, and goals,” says Ahmad Zoua, director of network IT and infrastructure at Guidepost Solutions, a security, investigations, and compliance firm. “What could be an essential security measure to one organization may have little value to another.” Poor planning and coordination can lead to needless duplication and redundancy. “In large organizations, we frequently see many products and platforms that have the same or similar capabilities,” says Doug Saylors, cybersecurity co-leader for technology research and advisory firm ISG. “This is typically the result of a lack of a cohesive cybersecurity strategy across IT functions and a disconnect with the business.” Organizations often layer security products on top of each other year after year.
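
An inventory of tool capabilities makes the duplication Saylors describes easy to spot mechanically. A toy sketch; the tool names and capability labels are invented for illustration:

```python
def find_overlaps(tools):
    """Report pairs of tools whose capability sets intersect --
    candidates for consolidation in the security budget.

    `tools` maps a tool name to a set of capability labels.
    """
    names = sorted(tools)
    overlaps = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = tools[a] & tools[b]
            if shared:
                overlaps[(a, b)] = shared
    return overlaps
```

Running this over a real tool inventory is a cheap first pass before the harder strategic conversation about which product to retire.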


An Experiment Showed that the Military Must Change Its Cybersecurity Approach

Weis says the Pentagon needs to measure its networks’ suitability for combat the same way it does for soldiers, sailors, tanks, and ships: through the concept of military readiness. Such an approach would mean prioritizing the biggest problems first, with second-tier or complicated ones set on slower paths to fixing. “There's 'ready to fight tonight.' But if you are a carrier strike group and you're deploying in three months, are you on a path to being ready? You manage your readiness on a day-to-day basis and it's a function of a whole bunch of things,” he said. “Do we have the right people? Are they trained? Are they qualified, or deficient? Do we have the equipment?” But Weis had to show that getting to a state of “readiness” in cyberspace is a matter of constant testing and drilling, not filling out compliance forms. He needed a safe space where he could understand readiness without exposing huge problems to adversaries or taking essential naval networks offline. He went to the Naval Postgraduate School, or NPS, in Monterey, California.


Bumpers in the bowling alley: the value of effective data management

According to John Peluso, chief product officer at AvePoint, a layered approach to security is an important way for businesses to achieve this goal. “The most direct thing that we have seen customers find value in – especially in the case of a malware event like ransomware – is the ability to access data,” he says. “The way to achieve this is by having a reliable business continuity strategy. “This becomes more difficult when you consider the data that is stored on someone else’s architecture – such as server content, cloud services, or anything with a synchronisation capability – is less covered by traditional enterprise data protection strategies. That’s new territory. While many businesses may think that because they have outsourced the architecture, they've also outsourced the responsibility, in some cases they haven’t. Businesses are becoming increasingly reliant on cloud services, so they need to be factored into the overall business continuity and resilience strategy.” This reliance on cloud services has, in some ways, been driven by the swift move to hybrid and remote working.


Feds Urge Healthcare Entities to Address Cloud Security

Most major healthcare organizations have become increasingly dependent on cloud-based services, says John Houston, vice president of privacy and information security and associate counsel of integrated healthcare delivery organizations at the University of Pittsburgh Medical Center, which includes 40 hospitals and 800 outpatient sites. This reliance is in large part due to many IT vendors moving their services "exclusively to the cloud," he tells Information Security Media Group. "As such, ensuring the security and availability of cloud-based services - and associated information - is and will remain one of UPMC's top priorities. "Unfortunately, such assurance can be problematic for a variety of reasons, most notably being able to accurately assess the cloud vendor’s security posture. Further, getting meaningful contractual commitments is difficult - including financial coverage in the event of a breach," Houston says. Benjamin Denkers, chief innovation officer at privacy and security consulting firm CynergisTek, says he also thinks the biggest threat involving cloud is when organizations are reliant on third parties and assume the environment is properly secured.


WebOps: A DevOps for Websites, but the Tools Let It Down

From an IT perspective, how is WebOps usually managed? According to Koenig, it depends on what the relationship is between the IT and marketing departments. In some cases, he said, the marketing department “earmarks budget to pay for developers who are technically in IT, but are dedicated to Marketing’s technology needs.” But in other cases, he’s seen “really strong central IT organizations” in which IT takes the lead — and in those cases, they tend to make use of their existing DevOps team and practices. In DevOps, CI/CD is a common part of the workflow. I asked if that’s the case with WebOps too, and if so how does CI/CD work in the web context? For static sites, Koenig replied, testing is done during the build (typically after content is updated). “The more challenging case is where people have content management,” he said, “so you have a living piece of software that’s running your live website, and that is connected to a database, it’s got some binary assets, images, PDFs, what have you. So you have people using that in production to post new content [but] you also want to be able to make design changes and add functionality.”


Why Are Robots So Important To Farmers?

Robots have revolutionized agriculture in recent years by increasing crop yields, decreasing labor costs, and simplifying the process of harvesting crops. The widespread use of robots in farming can be attributed to their ability to perform tasks that are either difficult or impossible for humans to do, such as moving around in tight spaces or reaching high up into plants. As a result of their increased efficiency and versatility, robots have become an essential part of modern agriculture. They are used to plant, harvest, package, and transport crops. They can also detect and avoid obstacles while performing tasks, significantly reducing the chances of human injury or equipment failure. In addition, robots are often equipped with sensors that allow them to gather information about crops and environmental conditions to optimize operations. Many plants are not resistant to insect damage or diseases, so robots may be used to control the insects or pathogens that often affect crops. Robots are also used in areas where humans cannot or would not wish to work, such as space exploration and deep-sea operations.


Five ways augmented analytics is protecting business revenue

Making sure the right person has the right information, at the right time, can be critical to a business. Suppose, for example, there’s an error in your app that prevents users in a particular country from logging in. Initially it may be just a drop in the ocean in terms of the company’s customer base, but over time it could result in user churn and a loss in revenue. Augmented analytics can identify such a problem early on from a minimal number of failed attempts and immediately flag it for the person who can fix it. This avoids lag time, and it stops alerts from being routed to the wrong department, where they are often overlooked by someone who misses their significance. Augmented analytics means potential revenue leaks can be plugged fast, and that means losses can be minimised. ... Keeping a customer satisfied is never easy. Human behaviour is hard enough to predict at the best of times. But augmented analytics can transform the way companies find and fix issues that are turning customers off. The technology identifies “hidden” trends, patterns and anomalies and alerts organisations faster than those anomalies would otherwise appear on traditional dashboards.
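A minimal sketch of the kind of early flagging described above, assuming hypothetical per-country failed-login counts and a simple statistical baseline (real augmented-analytics products use far richer models):

```python
from statistics import mean, stdev

def flag_anomalies(history, current, threshold_sigma=3.0):
    """Flag countries whose current failed-login count sits far above
    their historical baseline (mean + threshold_sigma * stdev)."""
    alerts = []
    for country, counts in history.items():
        baseline = mean(counts)
        spread = stdev(counts)
        if current.get(country, 0) > baseline + threshold_sigma * spread:
            alerts.append(country)
    return alerts

# Hourly failed-login counts per country (invented illustrative data).
history = {
    "DE": [2, 3, 1, 2, 4, 3],
    "FR": [1, 0, 2, 1, 1, 2],
}
current = {"DE": 3, "FR": 40}  # FR suddenly spikes
print(flag_anomalies(history, current))  # -> ['FR']
```

A handful of failures per hour stays within the baseline; the sudden jump in one country is flagged immediately rather than waiting for a dashboard review.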


How Google Cloud blocked the largest Layer 7 DDoS attack at 46 million rps

The attack was stopped at the edge of Google’s network, with the malicious requests blocked upstream from the customer’s application. Before the attack started, the customer had already configured Adaptive Protection in their relevant Cloud Armor security policy to learn and establish a baseline model of the normal traffic patterns for their service. As a result, Adaptive Protection was able to detect the DDoS attack early in its life cycle, analyze its incoming traffic, and generate an alert with a recommended protective rule – all before the attack ramped up. The customer acted on the alert by deploying the recommended rule, leveraging Cloud Armor’s recently launched rate limiting capability to throttle the attack traffic. They chose the ‘throttle’ action over a ‘deny’ action in order to reduce the chance of impact on legitimate traffic while severely limiting the attack capability by dropping most of the attack volume at Google’s network edge. Before deploying the rule in enforcement mode, it was first deployed in preview mode, which enabled the customer to validate that only the unwelcome traffic would be denied while legitimate users could continue accessing the service. 
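The article doesn't show Cloud Armor's internals, but the throttle-versus-deny trade-off can be illustrated with a toy token-bucket rate limiter: requests within the allowed rate keep flowing while the excess is dropped, instead of blocking the source outright.

```python
class TokenBucket:
    """Toy rate limiter illustrating a 'throttle' action: in-rate
    requests pass, excess requests are dropped at the edge, and the
    client is never denied outright."""
    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec      # sustained allowed rate
        self.capacity = burst         # short-burst allowance
        self.tokens = burst
        self.last = 0.0

    def allow(self, now):
        # Refill tokens for elapsed time, capped at burst capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=2, burst=3)
# A flood of 20 requests arriving every 0.25 s: most are dropped,
# but traffic at the sustained rate continues to get through.
results = [bucket.allow(now=0.25 * i) for i in range(20)]
print(sum(results))  # -> 12 of 20 requests pass
```

A `deny` action would correspond to returning `False` for every request from the source once it is classified as malicious, which risks cutting off legitimate users sharing that classification.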



Quote for the day:

"The final test of a leader is that he leaves behind him in other men, the conviction and the will to carry on." -- Walter Lippmann

Daily Tech Digest - August 18, 2022

How Productivity And Surveillance Technology Can Create A Crisis For Businesses

“The use of productivity and surveillance technology can create crisis situations for companies and organizations due to the fact that they are not always clear on what they are getting into,” according to Jeff Colt, founder and CEO of Aquarium Fish City, an aquarium and aquatic website. “Companies oftentimes do not fully understand the ramifications of using these tools. For example, if a company decides to implement surveillance technology in the workplace, it needs to make sure that it is not violating any laws. Additionally, it needs to make sure that it is not infringing on any employee rights or privacy rights,” he said in a statement. “The use of productivity and surveillance technology can also create crisis situations because some people may not be comfortable with being monitored by their employers. This could lead some employees to feel like they are being treated unfairly as well as causing them to quit their jobs altogether,” Colt noted. ... “The use of these technologies can have the opposite of their intended effect when not managed properly,” said Natalia Morozova, managing partner at Cohen, Tucker & Ades, an immigration law firm.


The benefits of regenerative architecture and unlocking the data potential in buildings

Regenerative architecture is “architecture that focuses on conservation and performance through a focused reduction on the environmental impacts of a building.” It can allow buildings to generate their own electricity and provides structures to sell excess energy back to the grid, creating a comprehensive, self-sustaining prosumer architecture. By producing their own energy through solar and wind turbines, these buildings significantly lower their carbon emissions and have more resilience in the face of extreme weather events. Some can even reverse environmental damage. But to fully leverage these opportunities, building owners and facility managers need smarter control of their energy. The right data, insights, and control help to make fast decisions and act on them. This is made possible by digitalizing buildings’ power systems. Buildings are responsible for 40% of the world’s CO2 emissions, second only to manufacturing. Yet, 30% of energy in buildings is wasted, often on heating, cooling, and lighting empty spaces.


Quantum Physics Could Finally Explain Consciousness, Scientists Say

The existence of free will as an element of consciousness also seems to be a deeply non-deterministic concept. Recall that in mathematics, computer science, and physics, deterministic functions or systems involve no randomness in the future state of the system; in other words, a deterministic function will always yield the same results if you give it the same inputs. Meanwhile, a nondeterministic function or system will give you different results every time, even if you provide the same input values. “I think that’s why cognitive sciences are looking toward quantum mechanics. In quantum mechanics, there is room for chance,” Danielsson tells Popular Mechanics. “Consciousness is a phenomenon associated with free will and free will makes use of the freedom that quantum mechanics supposedly provides.” However, Jeffrey Barrett, chancellor’s professor of logic and philosophy of science at the University of California, Irvine, thinks the connection is somewhat arbitrary from the cognitive science side.
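The deterministic/nondeterministic distinction the passage draws can be shown in a few lines (a toy illustration, not a claim about consciousness):

```python
import random

def deterministic(x):
    # A deterministic function: the same input always yields
    # the same output.
    return x * x + 1

def nondeterministic(x):
    # A nondeterministic function: output can differ between calls,
    # even with identical input.
    return x * x + random.choice([0, 1])

print(deterministic(3) == deterministic(3))  # -> True, always
samples = {nondeterministic(3) for _ in range(100)}
print(samples)  # varies run to run; values drawn from {9, 10}
```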


Eclypsium calls out Microsoft over bootloader security woes

The malicious shell activity involves visual elements that could potentially be detected by users on workstation monitors during the boot process; however, the vulnerabilities are especially dangerous for servers and industrial control systems that lack displays. The third vulnerability, CVE-2022-34302, is even harder to detect, as exploitation would remain virtually invisible to system owners. The researchers discovered that the New Horizon DataSys bootloader contains a small file that acts as a built-in bypass for Secure Boot; the 73 KB file disables the Secure Boot check without turning the protocol off completely, and it also has the ability to execute additional bypasses for security handlers. The discovery of the Horizon DataSys built-in Secure Boot bypass was definitely a "holy crap moment," Shkatov told SearchSecurity. The researchers said admin access is required for full exploitation, but they demonstrated an exploit during the presentation that used a phishing email and a malicious Word document that elevated their privileges to admin. 


Things You Should Know About Artificial Intelligence and Design

Nearly anyone who lives in the modern world produces data, often on the order of terabytes per day. We text our friends, stream videos, use fitness apps, ask Siri about the weather while we look out the window, walk by CCTV cameras, and the list goes on. Most of these data are unstructured, i.e. not organized in any clear order. Machine learning provides a way for computers to glean meaning from this lack of structure. As Armstrong puts it, “even now as you read, computers sift and categorize your data trails—both unstructured and structured — plunging deeper into who you are and what makes you tick.” How does it do this? The short answer is algorithms, statistical analysis, and prediction. Not sure what any of those words mean? ... As a researcher dedicated to demystifying emerging technology for landscape architects, I believe it is vital we get designers of all demographics and digital abilities to a shared understanding of what AI is so we can all better facilitate its continued permeation into practice. Big Data. Big Design. does this in spades.


The effect of digital transformation on the CIO job

The CIO has always been a super-important role. I'd liken it [in the past] to the role of a flight engineer. You can't take off if the flight engineer is not on board; he or she serves a super-important purpose – it's mission critical, it's a lights-on operation. It's about delivering a really important capability: to keep the engine, the plane running, in this case, the enterprise running. We're seeing a big change happen because with digital transformation -- and using technology to deliver a new business value proposition -- the world is now starting to center around digital. And the role of the CIO is changing because he or she's now more and more becoming the pilot or the co-pilot, helping colleagues and their stakeholders and the rest of the executive committee to really reimagine the business value proposition on the back of new technology. And so that's one big change that we're going through because the [CIO] seat at the table, the role of the individual, is completely changing. I think another thing that's happening is that tech is no longer the long pole in the tent. And what I mean by that is when you do digital transformation, it isn't just the tech, it's the data. 


How Can Clinical Trials Benefit From Natural language processing (NLP)?

NLP can help identify patterns in participant responses that may indicate whether a treatment is effective. This information can improve the accuracy of trial results and support better decisions about which treatments to pursue. In addition, NLP can help researchers understand why certain participants respond well or poorly to a treatment. This knowledge can help develop more effective treatments in the future. Several different NLP tools can be used in clinical trials. The most commonly used tools include machine learning algorithms, text mining techniques, and Word2Vec models. Each has advantages and disadvantages. Therefore, it’s crucial to pick the appropriate tool for the job. Fortunately, many software platforms provide pre-built libraries that make it easy to use NLP in your research projects. Natural language processing (NLP) has significantly impacted clinical trials by helping researchers identify patterns in participant feedback. This has allowed for more informed decisions about modifying or improving treatments. 
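As a minimal sketch of the pattern-finding idea, here is simple keyword counting over free-text feedback; the responses and keywords are invented, and a real pipeline would use proper text mining or embedding models rather than raw token counts:

```python
from collections import Counter
import re

def keyword_patterns(responses, keywords):
    """Count how often outcome-related keywords appear in free-text
    participant feedback -- a crude stand-in for NLP text mining."""
    counts = Counter()
    for text in responses:
        tokens = re.findall(r"[a-z']+", text.lower())
        for token in tokens:
            if token in keywords:
                counts[token] += 1
    return counts

# Hypothetical participant feedback from a trial.
responses = [
    "Headache improved, no nausea after the second week.",
    "Severe nausea and dizziness; stopped taking the dose.",
    "Improved sleep, mild nausea at first.",
]
keywords = {"improved", "nausea", "dizziness"}
print(keyword_patterns(responses, keywords))
# -> Counter({'nausea': 3, 'improved': 2, 'dizziness': 1})
```

Even this toy tally surfaces a pattern (nausea recurring across participants) that could prompt a closer look at tolerability.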


New neuromorphic chip for AI on the edge, at a small fraction of the energy and size of today's computing platforms

The key to NeuRRAM's energy efficiency is an innovative method to sense output in memory. Conventional approaches use voltage as input and measure current as the result. But this leads to the need for more complex and more power-hungry circuits. In NeuRRAM, the team engineered a neuron circuit that senses voltage and performs analog-to-digital conversion in an energy efficient manner. This voltage-mode sensing can activate all the rows and all the columns of an RRAM array in a single computing cycle, allowing higher parallelism. In the NeuRRAM architecture, CMOS neuron circuits are physically interleaved with RRAM weights. It differs from conventional designs where CMOS circuits are typically on the periphery of RRAM weights. The neuron's connections with the RRAM array can be configured to serve as either input or output of the neuron. This allows neural network inference in various data flow directions without incurring overheads in area or power consumption. This in turn makes the architecture easier to reconfigure.
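As a rough digital illustration (not the NeuRRAM circuit itself), the analog matrix-vector multiply that an RRAM crossbar performs in one cycle follows Ohm's law and Kirchhoff's current law: each column current is the sum over rows of conductance times input voltage, and every row and column contributes simultaneously.

```python
def crossbar_mvm(conductance, voltages):
    """Toy model of in-memory matrix-vector multiplication in an RRAM
    crossbar: column current c = sum over rows r of G[r][c] * V[r].
    All rows and columns are activated at once, which is where the
    parallelism of compute-in-memory comes from."""
    n_rows = len(conductance)
    n_cols = len(conductance[0])
    return [
        sum(conductance[r][c] * voltages[r] for r in range(n_rows))
        for c in range(n_cols)
    ]

# Conductances encode a 2x3 weight matrix; voltages encode the input.
G = [[0.5, 1.0, 0.0],
     [0.25, 0.0, 1.5]]
V = [2.0, 1.0]
print(crossbar_mvm(G, V))  # column currents -> [1.25, 2.0, 1.5]
```

On real hardware this sum happens physically on the bit lines in a single step; the neuron circuits then digitize the result, which is where NeuRRAM's voltage-mode sensing saves energy.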


Monoliths to Microservices: 4 Modernization Best Practices

Surveys have shown that the days of manually analyzing a monolith using sticky notes on whiteboards take too long, cost too much and rarely end in success. Which architect or developer in your team has the time and ability to stop what they’re doing to review millions of lines of code and tens of thousands of classes by hand? Large monolithic applications need an automated, data-driven way to identify potential service boundaries. ... When everything was in the monolith, your visibility was somewhat limited. If you’re able to expose the suggested service boundaries, you can begin to make decisions and test design concepts — for example, identifying overlapping functionality in multiple services. ... We all know that naming things is hard. When dealing with monolithic services, we can really only use the class names to figure out what is going on. With this information alone, it’s difficult to accurately identify which classes and functionality may belong to a particular domain. ... What qualities suggest that functionality previously contained in a monolith deserves to be a microservice?
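One automated, data-driven starting point is to treat the monolith's classes as a dependency graph and propose connected components as candidate service boundaries. The class names and dependency pairs below are hypothetical, and real modernization tools use much richer signals (call frequency, data access, domain terms) than raw connectivity:

```python
from collections import defaultdict

def candidate_boundaries(dependencies):
    """Group classes into connected components of an undirected
    dependency graph; each component is a crude first cut at a
    service boundary for architects to review."""
    graph = defaultdict(set)
    for a, b in dependencies:
        graph[a].add(b)
        graph[b].add(a)
    seen, components = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, component = [node], set()
        while stack:  # iterative depth-first traversal
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            component.add(cur)
            stack.extend(graph[cur] - seen)
        components.append(component)
    return components

deps = [("OrderService", "OrderRepo"), ("OrderRepo", "OrderEntity"),
        ("UserService", "UserRepo"),
        ("BillingService", "OrderEntity")]  # shared class couples two domains
print(candidate_boundaries(deps))
```

The output shows `BillingService` pulled into the order component via the shared `OrderEntity` class, exactly the kind of overlapping functionality the passage says boundary analysis should surface for a design decision.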


PC store told it can't claim full cyber-crime insurance after social-engineering attack

According to Chief District Judge Patrick Schiltz, who handed down the order, this case treads somewhat new legal ground. In the opinion, Schiltz noted that both SJ's lawsuit and Travelers' dismissal motion only cite three other cases, all from different jurisdictions, that "analyze the concept of direct causation in the context of computer or social-engineering fraud." All of those cases had a major difference in common, the court pointed out – none of them involved insurance policies that cover both computer and social engineering fraud, or make clear that the two types of fraud are different, mutually exclusive categories. This case, therefore, is less of a litmus test for the future of legal disagreements around social engineering insurance payouts, and more an examination of a close reading of contracts. "[Travelers'] Policy clearly anticipates – and clearly addresses – precisely the situation that gave rise to SJ Computers' loss, and the Policy bends over backwards to make clear that this situation involves social-engineering fraud, not computer fraud," Schiltz said.



Quote for the day:

"People only bring up your past when they are intimidated by your present." -- Joubert Botha

Daily Tech Digest - August 17, 2022

The second age of foundational technologies

We’re being overwhelmed by a tsunami of new foundational technology. Artificial intelligence (AI) is allowing computer systems to learn and solve problems that humans can’t. CRISPR is letting scientists edit genes and program DNA. Blockchain has brought new ways to think about money, contracts, and identity. The list of paradigm-shifting innovations goes on, and includes 3D printing, virtual reality, the metaverse, and civilian space flight. ... “When a technological revolution irrupts in the scene, it does not just add some dynamic new industries to the previous production structure. It provides the means for modernizing all the existing industries and activities.” Let that sink in for a minute. We are in the midst of “modernizing all the existing industries and activities.” That means enormous, wrenching, society-overhauling change. We see it all around us. Part of society is racing ahead with cryptocurrencies, social media, AI, and on and on—while others fight to hold on to a way of life they’ve always known. So, divides widen in society and politics, and between rich and poor, and rising and falling nations.


The IT Leader’s Guide to Helping Developers Avoid Burnout

In this new era of work, it's imperative for team members – from the CEO down – to have the ability to "read the virtual room" and have an understanding of what developers are thinking and feeling based on the tone and content of online interactions and conversations. Whether it’s Slack, Zoom, Teams or any other collaboration tool, it’s not the same as communicating face-to-face with someone who’s literally sitting at the same table. It’s possible to teach leaders the skills necessary to manage effectively in this environment, but we’re also seeing a rise of new and emerging leaders that are thriving because they place a priority on empathy and personal connections, even when most of the communication that takes place with their team members is digital. Paying attention to online social cues can help leaders determine if and when team members are stretching themselves too thin. Make no mistake, modern communication tools have helped make work more productive and efficient. But the best leaders are those who are able to analyze behavior on these tools so they can offer team members support when it’s needed most.


Edge computing: 4 key security issues for CIOs to prioritize

“Edge computing can create more complexity, and this can make securing the entire system more difficult,” says Jeremy Linden, the senior director of product management at Asimily. “Still, there is nothing inherently less secure about edge computing.” The big edge security risks should sound familiar – compromised credentials, malware and other malicious code, DDoS attacks, and so forth. What’s different is that these risks are now occurring farther and farther away from your primary or central environment(s) – the traditional network perimeter of yore is no longer your only concern. “Edge computing poses unique security challenges since you’re moving away from walled garden central cloud environments and everything is now accessible over the Internet,” says Priya Rajagopal, director, product management, Couchbase. The good news: Many of the same or similar tactics and tools organizations use to secure their cloud (especially hybrid cloud and/or multi-cloud) and on-premises environments still apply – they just need to be applied out at the edge.


Beyond Data Democracy: Why a Shift to Data Stewardship is Essential for Leadership Success

“Data democracy” has been heralded as the answer to this rapid cycle of innovation—but it is not enough. These initiatives have noble intentions: Sharing data and information about how users interact with products widely should, in theory, help groups across the business—from marketing to IT—operate from the same source of truth to stimulate better insights and better results faster. In reality, however, data democracy fails to yield those conclusive answers and shared goals. Too much raw data is difficult and time-consuming for teams to interpret, especially as the flow of digital signals has surged, and lacks the context needed to draw conclusions about the best path forward. Instead, the data is so oppressively overwhelming to manage that departments either give up or derive inaccurate conclusions—neither of which helps drive sound decisions and productive partnerships. Rather, these conditions create a new source of frustration and inefficiency for many engineering teams: the entire organization has access to information ripe for misinterpretation, even as expectations for results grow more urgent.


Microsoft Disrupts Russian Group's Multiyear Cyber-Espionage Campaign

Microsoft said its researchers have observed Seaborgium using stolen credentials to directly log in to victims' email accounts and steal their emails and attachments. In a few instances, the threat actor has also been observed configuring victim email accounts to forward emails to attacker-controlled addresses. "There have been several cases where Seaborgium has been observed using their impersonation accounts to facilitate dialogue with specific people of interest and, as a result, were included in conversations, sometimes unwittingly, involving multiple parties," ... As far as the disruption goes, the computing giant has now disabled accounts that Seaborgium actors have been using for victim reconnaissance, phishing, and other malicious activities. This includes multiple LinkedIn accounts. It has also developed detections for phishing domains associated with Seaborgium. F-Secure, which refers to the threat actor as the Callisto Group, has been tracking its activities since 2015. In a 2017 report, the security vendor had described Callisto Group as a sophisticated actor targeting governments, journalists, and think tanks in the EU and parts of eastern Europe.


What is challenging successful DevSecOps adoption?

Although adoption is low for now, the study also confirms potential growth in the industry with 62% of respondents saying their organization is actively evaluating use cases or has plans to implement DevSecOps. “As organizations adopt modern software development processes leveraging cloud platforms, they are looking to incorporate security processes and controls into developer workflows,” said Melinda Marks, senior analyst at ESG. “This research shows DevSecOps can be a game changer for companies, and there is no doubt we will see growing market traction over the next few years.” ... Companies believe that establishing a culture of collaboration and encouraging developers to leverage security best practices are nearly equal in importance to adopting DevSecOps tools. While it is common to anticipate cultural transformation to be a roadblock prior to adoption, those practicing DevSecOps report that technical limitations, such as data capture and analysis, are actually greater barriers to success.


Lawsuit Against FTC Intensifies Location Data Privacy Battle

The alleged dispute between Kochava and the FTC also comes in the wake of an executive order by President Biden in July, following the Supreme Court Roe v. Wade ruling. Among other actions, the executive order directed the FTC to consider options "to address deceptive or fraudulent practices, including online, and protect access to accurate information" (see: Biden Order Seeks to Protect Reproductive Data Privacy). Kochava claims the government is making the company a scapegoat. "The FTC's hope was to get a small, bootstrapped company to agree to a settlement - with the effect of setting precedent across the adtech industry and using that precedent to usurp the established process of Congress creating law. Kochava disagreed with this scheme and asked the federal court in Idaho to intervene," Mariam says. Also, among other allegations, Kochava's lawsuit claims the FTC’s proposed enforcement action would overstep its legal authority related to enforcing the FTC Act. The FTC declined ISMG's request for comment on the Kochava dispute.


IT Job Market Still Strong, But Economic Headwinds Complicate Picture

David Wagner, senior research director with Computer Economics, says despite the economic headwinds, about 60% of companies surveyed in the company's latest report said they were planning to increase headcount -- the largest percentage since the 2008 recession. “We continue to think this is a sign of more IT headcount growth in the next few years,” he explains. “It comes with a small caveat, of course, which is that the economic headwinds have gotten a little stronger over the last couple of months than they were at the beginning of the year.” However, from Wagner's perspective, IT has become so strategically important to every business that, particularly when it comes to IT staffing, companies are going to be as positive about their staffing and their IT spending as they can be. “It's not a surprise when Google and Microsoft both announced their most recent hiring freezes right around the time they were giving their quarterly earnings,” he says. “I think what's going to happen is there's going to be a pause as companies look around and figure out how bad things are going to be.”


When it comes to changing culture, think small

To change the way people work together, Martin argues, leaders must model the behaviors they want to see. “Literally the only way that I’ve seen culture change in the 42 years since I graduated from business school is when a leader sets out to demonstrate a different kind of behavior and makes that behavior work. Other people take their cues from that behavior, and, slowly but surely, the culture changes,” he says. “Kremlin-watching does not happen only in Moscow—it’s an incredibly powerful force. People watch the leadership and do what the leadership does.” A notable aspect of this approach is that it does not require a major initiative or investment. Instead, the culture change depends on micro-interventions: small adjustments to the structure, dynamics, or framing of interpersonal interactions, applied consistently over time. Martin helped orchestrate this kind of change while working with A.G. Lafley when he was the CEO of Procter & Gamble. Lafley wanted to revamp the consumer giant’s overly bureaucratic strategic process. 


How To Do Data Governance Better

Business initiatives are built on data, and your data governance program needs to support those objectives. For example, your business goal might be better data discovery to make business reporting more easily consumed or findable. You need to understand—and embrace—how data is consumed and used. This drives the core metrics and dashboards for validating data and checking data quality. When you scope out a core purpose or goal you’re trying to achieve in the first few months or quarters, then you won’t get overwhelmed. A data domain represents the logical grouping of data, either by item or area of interest, within an organization. With these high-level categories in place, organizations can assign accountability or responsibility for their data. Decentralized consumption models make it possible for different teams to define categories differently based on domain-level knowledge. They may use different names or metrics for the same data. A shared vocabulary across all departments standardizes how data is being used and accessed, increasing alignment across departments and making use and accountability easier for everyone.



Quote for the day:

"You don't lead by pointing and telling people some place to go. You lead by going to that place and making a case." -- Ken Kesey

Daily Tech Digest - August 16, 2022

What are virtual routers and how can they lead to virtual data centers?

So what can you do with virtual router technology? The number one application, according to enterprises, is virtual networking, especially SD-WAN. All virtual-network technologies build an overlay network that has its own on- and off-ramp elements, which are really access routers. While many vendors offer this technology as appliances, most will also provide virtual routers for hosting on servers. That may make sense in the data center, where there are already racks of servers installed. Using virtual routers means that if one fails because its server went down, another can be easily spun up to take its place. Virtual routers are also essential in many cloud applications. Public cloud providers are understandably unenthusiastic about your sending your techs to install routers in their data centers, but you may need a virtual router there if you want to use virtual networking and SD-WAN optimally. For this type of cloud virtual routing, make sure your virtual router is compatible with the virtual network or SD-WAN technology you’re using.


Overcoming the roadblocks to passwordless authentication

There are a variety of roadblocks associated with moving to passwordless authentication. Foremost is that people hate change. End users push back when you ask them to abandon the familiar password-based login page and go through the rigamarole of registering a factor or device required for typical passwordless flows. Further, the app owners will often resist changing them to support passwordless flows. Overcoming these obstacles can be hard and expensive. It can also be exacerbated by the need to support more than one vendor’s passwordless solution. For example, most passwordless solutions pose app-level integration challenges that require implementing SDKs to support even simple flows. What happens if you want to support more than one solution? Or use your passwordless solution as both a primary identity and authentication provider and a step-up authentication provider? Or you want to layer in behavioral analytics? There is a way to address these human and technical challenges standing in the way of passwordless adoption using orchestration. Although common in virtualized computing stacks, orchestration is a new concept in identity architectures. 


Obsolescence management for IT leaders

Obsolescence will always be a by-product of continuous technological advances. The best way to improve cyber security and reduce downtime risks is to prepare effectively and take proactive steps to manage obsolescence. With a proactive obsolescence management plan in place, such as a cloud-first approach, businesses can track the lifespan of products. This ensures that IT and operational technology are always protected, improving productivity and reducing costs. To plan for the future, mid-size businesses should carry out an assessment of current infrastructure to understand the components of the IT and operational technology landscape and how these systems interact. Vendors will often publish end-of-life dates for hardware and software at least twelve months in advance. IT managers should look at how much they already spend on maintenance and whether downtime has occurred before. Understanding the risks can also help businesses make more informed decisions about their equipment. Businesses should consider how the failure of a hardware or software component will impact operations, costs and reputation, and whether the equipment is compatible with the rest of the system.
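A sketch of how tracking those published end-of-life dates might look, with hypothetical asset names and dates:

```python
from datetime import date

def flag_eol(inventory, today, horizon_days=365):
    """Return assets whose vendor-published end-of-life date falls
    within the planning horizon (vendors typically announce EOL at
    least twelve months ahead, so a one-year horizon is a sensible
    default for replacement planning)."""
    return [
        name for name, eol in inventory.items()
        if (eol - today).days <= horizon_days
    ]

# Hypothetical asset inventory with vendor-published EOL dates.
inventory = {
    "edge-switch-fw": date(2023, 3, 1),
    "erp-server-os": date(2022, 11, 30),
    "hmi-runtime": date(2025, 6, 1),
}
print(flag_eol(inventory, today=date(2022, 8, 16)))
# -> ['edge-switch-fw', 'erp-server-os']
```

Running this periodically turns obsolescence management into a standing review item rather than a surprise when support lapses.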


The pitfalls of poor data management – and how to avoid them

One of the challenges is how differences in patient profile can drastically change the costs associated with the same procedure. For example, a healthy patient with no comorbidities can likely receive a colonoscopy at an outpatient center. However, a patient with a medical condition such as hemophilia would need that same colonoscopy performed in the more costly hospital setting because of the complications that could potentially arise. This variability makes providing accurate estimates complicated. One way to potentially address this issue is to provide best-case and worst-case estimates. Getting to the point where these estimates can be made in real time, so that a procedure can safely continue when a complication arises without the concern of being fined or not properly reimbursed, is key. Also, while the regulations are well-intended, the reality is it is probably unnecessary to have the specified level of price transparency for every encounter. We need to focus on the most problematic events – those medical episodes that bankrupt people because they had no idea what their out-of-pocket costs would be.


Icelandic datacentres may lead the way to green IT

One of the main application areas where Icelandic datacentres make a lot of sense is in artificial intelligence (AI). With the advancement of AI methodologies such as unsupervised machine learning, for many applications, AI training and inference now needs to occur in the same location – they need to be colocated to facilitate iteration between the two processes. Foundational AI models run for weeks or months to retrain, so running a full training data set is very energy intensive. Businesses that depend on AI models do training continuously to get different versions of the models. For example, they might train for a specific customer who has a data set they want trained against. ... A second type of application where Icelandic datacentres make sense is in financial services. Although trading applications require very low latency and are usually placed close to exchanges in edge or metro locations, they depend on the output of larger, more compute intensive applications. These applications use thousands of computers 24 hours a day to run Monte Carlo simulations and Markov Chain analysis to make predictions about market trends. 


Automotive hacking – the cyber risk auto insurers must consider

Cyber exposures are a relatively new frontier for auto insurance. Traditional risk considerations have revolved around liability or theft, but those have evolved amid the increasingly connected landscape for vehicles. “We must evaluate the types of losses happening and what’s causing those losses. Are they related to malfunctions in a vehicle? Are they related to hacking? It’s a challenge for insurers even to determine the ultimate cause of a loss,” said Perfetto. “If there was an accident, and it wasn’t the driver’s fault per se but more of a vehicle malfunction, that may not be easily attributed. If there was a hacking incident, that might not be easy to discover.” ... “We have seen data that supports reduction in accident frequency related to certain technology added to a vehicle. But we have also seen the cost of replacing some more advanced technologies increase. Something as simple as a rear-end collision or a minor dent in your bumper that used to be an easy and relatively inexpensive item to fix has become much more costly,” Perfetto noted.


Are debt financings the new venture round for fintech startups?

You have to plan ahead for venture debt. Put it in place relatively soon after an equity financing. That way there is no adverse selection for the lenders; everyone (founders, VCs and lenders) around the table is happy at that time. If you try to put something in place with less than six months of cash, you will not be able to get debt. If you put it in place after an equity round, you can draw it down well into the future — that’s called a forward commitment/drawdown. That gives the startup a lot of optionality. It’s super important to understand all the terms. Often, founders don’t realize there are things like funding MACs (material adverse change clauses), investor abandonment clauses, etc. These terms can be used by the lender to block the startup from either drawing down the money or creating a default after the money has been drawn. Either way, the company is in trouble and can’t count on the capital. So you really need to know your lender, have your VCs know your lender and pay attention to your terms. This is why we created the Sample Venture Debt Term Sheet, to explain all the terms.


The cybersecurity skills gap is ‘not just about addressing headcount’

From a security perspective, I’m hoping an increase in connected systems will lead to fewer cyberattacks caused by human error. This will largely revolve around increasing API accessibility and integration. Not only do better integrations allow employees to do better, more efficient work, they also enable a more secure infrastructure throughout your entire organisation. For example, when APIs are accessible throughout the application ecosystem, this allows for systems to be configured through code, helping us introduce streamlined changes to configuration rather than having to go into specific applications. From a security perspective, this enables us to do advanced things like segregation of duty and activity monitoring at scale. These benefits are a large part of why we prioritise connectivity and API accessibility at Templafy, both in our own tech stack and our platform. We know it not only benefits our own team, but also our customers.
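The "configured through code" idea above can be sketched as follows. Everything here is hypothetical: the admin endpoint, payload shape, and bearer-token scheme stand in for whatever API a given system actually exposes.

```python
# Hedged sketch of configuration-as-code: push a declarative config to an
# application's (hypothetical) admin API instead of clicking through its UI.
import json
import urllib.request

def build_config_request(base_url: str, token: str, config: dict) -> urllib.request.Request:
    """Build a PUT request carrying the desired configuration as JSON."""
    return urllib.request.Request(
        f"{base_url}/api/v1/config",  # hypothetical admin endpoint
        data=json.dumps(config).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="PUT",
    )

req = build_config_request(
    "https://example.internal",
    "TOKEN",
    {"sso": {"enforced": True}},  # example declarative setting
)
print(req.get_method(), req.full_url)
```

Because the change is expressed as data and sent through one code path, it can be reviewed, versioned, and audited — which is what makes segregation of duties and activity monitoring tractable at scale.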


IT leadership: Why adaptability matters

The rise of technology has incentivized industries to adapt in recent years. Still, that push is becoming a pull as realities like The Great Resignation and remote work push organizations to change how they interact with and relate to their customers and employees. The return on investment of developing adaptability in organizations comes from talent attraction and retention, increased innovation, improved employee engagement – and potentially, organizational survival. In the past, leaders have been able to draw from models such as William Bridges’ Transitions to understand adaptability. But while these approaches may help us to understand how a person adapts and what behaviors leaders should expect as people move through change, few have explored the why. And without that knowledge, it can be challenging for leaders to create psychologically healthy workplaces that support people as they adapt. Because adapt they must. The key to unlocking the potential of emotional intelligence is first to understand the construct and then identify the areas for development. The same goes for AQ, the adaptability quotient.


Developer Experience vs. User Experience

Retaining developers requires more than first impressions. Just as good UX needs to be evaluated, refined, and tested over time, good DX is an investment in the long term. You won’t know how well you’ve succeeded without using analytics to evaluate your DX and test changes. Monitoring your API helps you identify users who have not been able to successfully make API calls, find patterns of success and failure for developers, and see how different users are engaging with your product over time. While tracking UX metrics is relatively straightforward for products focused on end-users, DX metrics differ in important ways. You need to develop a good strategy for API analytics so that you track relevant business value metrics while avoiding vanity metrics. ... You need to understand DX when you build products for developers so that you can attract developer users, inspire their confidence and creativity, and support their increasingly complex integrations over time. Building good UX and DX can be challenging, but with the right analytics stack, you can monitor your API and use metrics to craft the perfect API developer experience.
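The monitoring described above — finding users who have never made a successful API call and spotting patterns of success and failure — can be sketched from a call log. The log format and user names here are invented for illustration; a real pipeline would read from your API gateway's analytics.

```python
# Sketch of simple API analytics: compute per-user success rates from a call
# log and flag developers who have never made a successful call.
# The log schema and user names are invented for illustration.
from collections import defaultdict

calls = [
    {"user": "dev-a", "endpoint": "/v1/render", "status": 200},
    {"user": "dev-a", "endpoint": "/v1/render", "status": 200},
    {"user": "dev-b", "endpoint": "/v1/render", "status": 401},
    {"user": "dev-b", "endpoint": "/v1/templates", "status": 401},
]

def success_rates(log):
    """Map each user to the fraction of their calls that returned 2xx."""
    totals, oks = defaultdict(int), defaultdict(int)
    for call in log:
        totals[call["user"]] += 1
        if 200 <= call["status"] < 300:
            oks[call["user"]] += 1
    return {user: oks[user] / totals[user] for user in totals}

rates = success_rates(calls)
struggling = [user for user, rate in rates.items() if rate == 0.0]
print(rates, struggling)
```

A success rate is a business-value metric in the article's sense: a developer stuck at zero successful calls is a churn risk worth reaching out to, whereas raw call volume alone is the kind of vanity metric the article warns against.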



Quote for the day:

"Taking charge of your own learning is a part of taking charge of your life, which is the sine qua non in becoming an integrated person." -- Warren G. Bennis