Daily Tech Digest - July 28, 2022

The Beautiful Lies of Machine Learning in Security

The biggest challenge in ML is the availability of relevant, usable data to solve your problem. For supervised ML, you need a large, correctly labeled dataset. To build a model that identifies cat photos, for example, you train the model on many photos of cats labeled "cat" and many photos of things that aren't cats labeled "not cat." If you don’t have enough photos or they're poorly labeled, your model won't work well. In security, a well-known supervised ML use case is signatureless malware detection. Many endpoint protection platform (EPP) vendors use ML to label huge quantities of malicious and benign samples, training a model on "what malware looks like." These models can correctly identify evasive mutating malware and other trickery where a file is altered enough to dodge a signature but remains malicious. ML doesn't match the signature; it predicts malice using another feature set and can often catch malware that signature-based methods miss. However, because ML models are probabilistic, there's a trade-off: ML can catch malware that signatures miss, but it may also miss malware that signatures catch.
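As a rough illustration of the "label, train, predict" workflow the excerpt describes (not any vendor's actual model, and with invented feature names), here is a toy supervised classifier that learns what each class "looks like" from labeled samples:

```python
# Toy sketch of supervised classification: a nearest-centroid model trained
# on labeled feature vectors. Real EPP models use far richer features and
# algorithms; this only illustrates training on labeled data.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(samples):
    # samples: list of (feature_vector, label) pairs
    by_label = {}
    for vec, label in samples:
        by_label.setdefault(label, []).append(vec)
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def predict(model, vec):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], vec))

# Hypothetical features: [byte entropy, packed-section ratio]
training = [
    ([7.8, 0.9], "malware"), ([7.5, 0.8], "malware"),
    ([4.2, 0.1], "benign"),  ([5.0, 0.2], "benign"),
]
model = train(training)
# A slightly mutated sample still lands near the malware centroid,
# even though no exact signature would match it.
print(predict(model, [7.6, 0.85]))
```

The point of the sketch is the trade-off in the excerpt: the model generalizes beyond exact matches, but a sample far from both centroids can be classified wrongly in either direction.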


6 Machine Learning Algorithms to Know About When Learning Data Science

Decision trees are models that resemble a tree-like structure containing decisions and possible outcomes. They consist of a root node, which forms the start of the tree; decision nodes, which are used to split the data based on a condition; and leaf nodes, which form the terminal points of the tree and the final outcome. Once a decision tree has been formed, we can use it to predict values when new data is presented to it. ... Random Forest is a supervised ensemble machine learning algorithm that aggregates the results from multiple decision trees, and can be applied to classification and regression problems. Using the results from multiple decision trees is a simple concept and allows us to reduce the overfitting and underfitting experienced with a single decision tree. To create a Random Forest, we first need to randomly select a subset of samples and features from the main dataset, a process known as “bootstrapping”. This data is then used to build a decision tree. Carrying out bootstrapping prevents the decision trees from being highly correlated and improves model performance.
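The bootstrap-then-aggregate idea above can be sketched in a few lines. This is a deliberately minimal illustration (the "trees" are one-feature threshold stumps rather than full decision trees, and the data is synthetic), but the structure mirrors a Random Forest: sample with replacement, fit one learner per sample, then take a majority vote:

```python
import random

# Minimal Random Forest sketch: bootstrap the training set, fit one weak
# learner per bootstrap sample, aggregate predictions by majority vote.

def bootstrap(data, rng):
    # Sampling with replacement: some rows repeat, some are left out.
    return [rng.choice(data) for _ in data]

def fit_stump(data, rng):
    # A one-feature threshold "tree": split at the feature's mean value.
    feature = rng.randrange(len(data[0][0]))
    threshold = sum(x[feature] for x, _ in data) / len(data)
    above = [y for x, y in data if x[feature] > threshold]
    below = [y for x, y in data if x[feature] <= threshold]
    label_above = max(set(above), key=above.count) if above else data[0][1]
    label_below = max(set(below), key=below.count) if below else data[0][1]
    return lambda x: label_above if x[feature] > threshold else label_below

def fit_forest(data, n_trees=25, seed=0):
    rng = random.Random(seed)
    return [fit_stump(bootstrap(data, rng), rng) for _ in range(n_trees)]

def predict(forest, x):
    votes = [tree(x) for tree in forest]
    return max(set(votes), key=votes.count)

data = [([1.0, 0.2], "A"), ([1.2, 0.1], "A"), ([3.0, 0.9], "B"), ([2.8, 1.1], "B")]
forest = fit_forest(data)
print(predict(forest, [1.1, 0.15]))
```

Because each stump sees a different bootstrap sample (and a randomly chosen feature), their errors are less correlated, which is exactly why averaging them reduces the overfitting of any single tree.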


Data science isn’t particularly sexy, but it’s more important than ever

Not only is data cleansing an essential part of data science, it’s actually where data scientists spend as much as 80% of their time. It has ever been thus. As Mike Driscoll described in 2009, such “data munging” is a “painful process of cleaning, parsing and proofing one’s data.” Super sexy! Now add to that drudgery the very real likelihood that many enterprises, as excited as they are to jump into data science, lack “a suitable infrastructure in place to start getting value out of AI,” as Jonny Brooks has articulated: The data scientist likely came in to write smart machine learning algorithms to drive insight but can’t do this because their first job is to sort out the data infrastructure and/or create analytic reports. In contrast, the company only wanted a chart that they could present in their board meeting each day. The company then gets frustrated because they don’t see value being driven quickly enough and all of this leads to the data scientist being unhappy in their role. As I have written before: “Data scientists join a company to change the world through data, but quit when they realize they’re merely taking out the data garbage.”
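For readers who haven't lived it, a tiny taste of the "cleaning, parsing and proofing" drudgery described above (the field names and records here are invented for illustration):

```python
import csv, io

# Toy data munging: trim whitespace, coerce types, drop unparseable
# rows, and deduplicate before any modeling can happen.
raw = io.StringIO(
    "name,revenue\n"
    "  Acme ,1200\n"
    "Acme,1200\n"        # exact duplicate once whitespace is trimmed
    "Globex,n/a\n"       # missing value encoded as free text
    "Initech, 980 \n"
)

cleaned, seen = [], set()
for row in csv.DictReader(raw):
    name = row["name"].strip()
    try:
        revenue = float(row["revenue"].strip())
    except ValueError:
        continue                      # drop rows with unparseable numbers
    key = (name, revenue)
    if key in seen:                   # drop duplicates
        continue
    seen.add(key)
    cleaned.append({"name": name, "revenue": revenue})

print(cleaned)
```

Multiply this by dozens of sources, each with its own encodings of "missing" and its own duplication quirks, and the 80% figure stops sounding surprising.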


Top 7 Skills Required to Become a Data Scientist

Having a deep understanding of machine learning and artificial intelligence is a must for implementing tools and techniques such as decision trees and other model logic. These skills enable a data scientist to work on and solve complex problems, specifically those designed for prediction or for deciding future goals. Those who possess these skills will surely stand out as proficient professionals. With the help of machine learning and AI concepts, an individual can work on different algorithms and data-driven models, and simultaneously handle large datasets, for example by cleaning data to remove redundancies. ... Establishing your career as a data science professional will also require the ability to handle complexity. One must be able to identify and develop both creative and effective solutions as and when required. Developing such solutions demands clarity in data science concepts and the ability to break problems down into multiple parts and align them in a structured way.


The Psychology Of Courage: 7 Traits Of Courageous Leaders

Like so many complex psychological human characteristics, courage can be difficult to nail down. On the surface, courage seems like one of those “I know it when I see it” concepts. In my twenty years spent facilitating and coaching innovation, creativity, strategy and leadership programs, and in partnership with Dr. Glenn Geher of the Psychology Department of the State University of New York at New Paltz, I’ve identified behavioral attributes that often correlate with a person’s access to their courage. Each attribute has influential effects on organizational culture at all levels. Fostering these attributes in your own life (at work and beyond) and within your team can help you lead toward the courageous future you’re striving to achieve. ... Courage requires taking intentional risks. And the bigger the risk, the more courage it takes (and the bigger the outcome can be). Those who understand the importance of facing fear and being vulnerable, who accept that falling and getting up again is part of the journey, tend to have quicker access to their courage.


There is a path to replace TCP in the datacenter

"The problem with TCP is that it doesn't let us take advantage of the power of datacenter networks, the kind that make it possible to send really short messages back and forth between machines at these fine time scales," John Ousterhout, Professor of Computer Science at Stanford, told The Register. "With TCP you can't do that, the protocol was designed in so many ways that make it hard to do that." It's not like the realization of TCP's limitations is anything new. There has been progress to bust through some of the biggest problems, including in congestion control to solve the problem of machines sending to the same target at the same time, causing a backup through the network. But these are incremental tweaks to something that is inherently not suitable, especially for the largest datacenter applications (think Google and others). "Every design decision in TCP is wrong for the datacenter and the problem is, there's no one thing you can do to make it better, it has to change in almost every way, including the API, the very interface people use to send and receive data. It all has to change," he opined.


Typemock Simplifies .NET, C++ Unit Testing

When testing legacy code, you need to test small parts of the logic one by one, such as the behavior of a single function, method or class. To do that, the logic must be isolated from the legacy code, he explained. As Jennifer Riggins explained in a previous post, unit testing differs from integration testing, which focuses on the interaction between these units or components; unit testing catches errors at the unit level earlier, so the cost of fixing them is dramatically reduced. ... Typemock uses special code that can intercept the flow of the software: instead of calling the real code (it doesn’t matter whether it’s a real method or a virtual method), it can intercept the call, and you can fake different things in the code, he said. Typemock has been around since 2004, when Lopian launched the company with Roy Osherove, a well-known figure in test-driven development. They first released Typemock Isolator in 2006, a tool for unit testing SharePoint, WCF and other .NET projects. Isolator provides an API that helps users write simple, human-readable tests that are completely isolated from the production code.
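Typemock targets .NET and C++, but the general idea it describes (intercepting a call so a small piece of legacy logic can be tested in isolation) can be sketched in Python with the standard library's mocking tools. The function names here are invented for illustration:

```python
from unittest.mock import patch

# Sketch of test isolation: the unit under test is entangled with a
# dependency we cannot call in tests, so we intercept that call and
# substitute a fake, leaving only the logic itself to verify.

def fetch_exchange_rate(currency):
    raise RuntimeError("talks to a real service; unavailable in tests")

def legacy_invoice_total(amount, currency):
    # The unit under test: a small piece of logic we want to verify alone.
    return round(amount * fetch_exchange_rate(currency), 2)

# Intercept the dependency: the real fetch is never called.
with patch(f"{__name__}.fetch_exchange_rate", return_value=1.25):
    total = legacy_invoice_total(100.0, "EUR")

print(total)
```

Without the interception, the test could not run at all; with it, a failure points directly at the invoice logic rather than at the network, which is the cost-of-fixing argument made above.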


Why Web 3.0 Will Change the Current State of the Attention Economy Drastically

The attention economy requires improvements, and Web 3.0 is capable of making them happen. In the foreseeable future, it will drastically change the interplay between consumers, advertisers and social media platforms. Web 3.0 will give power to the people. It may sound pompous, but it's true. How is that possible? Firstly, Web 3.0 will grant users ownership of their data, so you'll be able to treat your data like it's your property. Secondly, it will enable you to be paid for the work you are doing when making posts and giving likes on social media. Both options provide you with the opportunity to monetize the attention that you give and receive. The agreeable thing about Web 3.0 is that it's all about honest ownership. If a piece of art can be an NFT with easily traceable ownership, your data can be too. If you own your data, you can monetize or offer it on your terms, knowing who is going to use it and how. For instance, there is Permission, a tokenized Web 3.0 advertising platform that connects brands with consumers, with the latter getting crypto rewards for their data and engagement. 


Serverless-first: implementing serverless architecture from the transformation outset

While a serverless-first mindset provides a range of benefits, some businesses may be hesitant to make the transition due to concerns around cloud provider security, vendor lock-in, sunk costs from other strategies and ongoing issues with debugging and development environments. However, even among the most serverless-averse, this mindset can provide benefits to a select part of an organisation. Take for example a bank’s operations. While the maintenance of a traditional network infrastructure is crucial for uptime of the underlying database, a serverless approach gives the bank the freedom to implement an agile mindset with consumer-facing apps and technologies as demand grows. Agile and serverless strategies typically go hand-in-hand, and both can encourage quick development, modification and adaptation. In relation to concerns around vendor lock-in, some organisations may look towards a cloud-agnostic strategy. However, writing software for multiple clouds removes the ability to use features offered by one specific cloud, meaning any competitive advantage of using a specific vendor is then lost.


CISO in the Age of Convergence: Protecting OT and IT Networks

Pan Kamal, head of products at BluBracket, a provider of code security solutions, says one of the first steps an organization can take is to create an IT-OT convergence task force that maps out the asset inventory and then determines where IT security policy needs to be applied within the OT domain. “Review industry-specific cybersecurity regulations and prioritize implementation of mandatory security controls where called for,” Kamal adds. “I also recommend investing in a converged dashboard -- either off the shelf or create a custom dashboard that can identify vulnerabilities and threats and prioritize risk by criticality.” Then, organizations must examine the network architecture to see if secure connections with one-way communications -- via data diodes for example -- can eliminate the possibility of an intruder coming in from the corporate network and pivoting to the OT network. Another key element is conducting a review of security policies related to both the equipment and the software supply chain, which can help identify secrets in code present in git repositories and help remediate them prior to the software ever being deployed.
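The kind of secrets-in-code review mentioned above typically boils down to pattern scanning over repository contents. As a minimal sketch (not BluBracket's product, and with deliberately simplified patterns), it might look like this:

```python
import re

# Toy secrets scanner: flag source lines matching common credential shapes.
# Real tools use many more patterns plus entropy checks and git history.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"(?i)(password|api_key|secret)\s*=\s*['\"][^'\"]+['\"]"),
]

def find_secrets(lines):
    hits = []
    for lineno, line in enumerate(lines, start=1):
        for pat in SECRET_PATTERNS:
            if pat.search(line):
                hits.append((lineno, line.strip()))
                break
    return hits

code = [
    "db_host = '10.0.0.5'",
    "password = 'hunter2'",
    "key = 'AKIAABCDEFGHIJKLMNOP'",
]
print(find_secrets(code))
```

Running a scan like this in CI, before deployment, is what "remediate them prior to the software ever being deployed" means in practice.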



Quote for the day:

"Inspired leaders move a business beyond problems into opportunities." -- Dr. Abraham Zaleznik

Daily Tech Digest - July 27, 2022

Zycus: Five digital transformation trends in procurement

Generating vast quantities of data, organisations need to be aware of the level of data management required in order to successfully deliver a digital transformation in procurement. “Not understanding the data implications may result in budget overruns, overtime, or scope reduction in data management. Data is a key input for many processes and decisions in modern organisations, and underestimating its relevance can cause an inability to meet goals related to supplier enablement or PO automation due to capacity and scope constraints,” said Zycus. When it comes to the quality of data, process digitalisation is a key driver. Process digitalisation reduces human error, generates greater business insights, improves decision-making capabilities, and increases value creation. ... “In recent years, Procurement departments have become more prone to cyberattacks in the form of malware via a software update, attacks on cloud services, ransomware, business email compromise, attack on supply chain, etc.,” commented Zycus. Such threats can result in a loss of sensitive data and/or financial losses.


Striving for a better balance in the future of work

It is worth noting that the principle of coordinated working hours in offices grew out of working patterns in factories, at a time when doing business was mainly an in-person exercise. Yet, as everyone who has been through the pandemic knows, knowledge workers no longer work that way: we’re asynchronous, remote, and international. In many senses, this change in expectations is no change at all. Knowledge work has always been marked by a sense of asynchronicity. People meet, talk, agree, and then go off and work in small groups or alone. What has changed is that 65% of workers now have, and expect, more flexibility to decide when they work. ... Perhaps one of the most boringly predictable challenges remote workers face involves the tools they’re asked to use. On average, workers have 6.2 apps sending them notifications at work, and 73% of them respond to those outside of working hours, further eroding the division between (asynchronous) work time and personal time. ... A worker may find that they do their work at times that suit them best, but still feel pressured to pretend to be present the rest of the time, too.


The Metaverse can shake up Digital Commerce forever

The metaverse has already become a playground for luxury fashion brands, with some launching their new collections in the virtual world and others partnering up with developers to create their own bespoke games. In the near future, we anticipate more brands will follow and break the boundaries between virtual and physical reality to create more innovative, meaningful interactions with consumers. We are in the very early days here and our team will be working on many different pilots and experiments. There are several use-cases for Web 3.0 in e-commerce. For example, brands looking to connect with loyal users and fans can provide additional value by way of gated commerce enabled through NFTs. At the same time, brands and artists can use NFTs to build and monetize communities. We can create immersive shopping experiences in the Virtual Worlds/Metaverse, an ever-expanding real-time world, with the help of 3D virtual spaces. We can also enable e-commerce landscapes based on the Blockchain that will allow anyone to trade physical products on-chain.


SaaS Security Risk and Challenges

SaaS providers are unlikely to send infrastructure- and application-level security event logs to customers’ security information and event management (SIEM) solutions, leaving customers’ security operations teams lacking in terms of important information. This diminishes the ability to identify and manage potential security incidents. For example, it can be difficult to know whether and when a brute-force password replay attack is perpetrated against a SaaS customer user account. Such attacks could lead to undetected data breaches, resulting in the organization being considered liable for the data leak and for not reporting the incident to the appropriate parties (e.g., employees, customers, authorities) in a timely manner. ... It can be challenging for customers to understand the fundamental nature of a SaaS provider’s risk culture. Audits, certifications, questionnaires, and other materials paint a narrow picture of the providers’ security posture. Moreover, SaaS providers are unlikely to share their risk register with customers, as this would reveal excessive details about the SaaS provider’s security posture. Further, SaaS providers are unlikely to undergo detailed customer audits due to limited resources. 
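The brute-force example above is exactly the kind of detection that requires the missing logs. Given access to authentication events, the core check is simple: count failed logins per account inside a sliding window. This sketch uses an invented event format and arbitrary thresholds:

```python
from collections import defaultdict

# Sketch of brute-force detection from auth logs: flag accounts with
# more than max_failures failed logins inside a short time window.

def brute_force_suspects(events, max_failures=3, window=60):
    # events: (timestamp_seconds, account, success) tuples, sorted by time
    failures = defaultdict(list)
    suspects = set()
    for ts, account, success in events:
        if success:
            continue
        # Keep only failures still inside the window, then add this one.
        recent = [t for t in failures[account] if ts - t <= window]
        recent.append(ts)
        failures[account] = recent
        if len(recent) > max_failures:
            suspects.add(account)
    return suspects

events = [
    (0, "alice", False), (10, "alice", False), (20, "alice", False),
    (30, "alice", False),                      # 4th failure within 60s
    (0, "bob", False), (300, "bob", True),     # one failure, then success
]
print(brute_force_suspects(events))
```

Without the provider exporting these events to the customer's SIEM, this logic has nothing to run against, which is the visibility gap the excerpt describes.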


Optimize Distributed AI Training Using Intel® oneAPI Toolkits

Supervised learning requires large amounts of labeled data. Labeling and annotation must be done manually by human experts, so it is laborious and expensive. Semi-supervised learning is a technique where both labeled and unlabeled data are used to train the model. Usually, the number of labeled data points is significantly smaller than the number of unlabeled data points. Semi-supervised learning exploits patterns and trends in data for classification. Semi-supervised GANs (S-GANs) tackle the requirement for vast amounts of training data by generating data points using generative models. The generative adversarial network (GAN) is an architecture that uses large, unlabeled datasets to train an image generator model via an image discriminator model. GANs comprise two models: generative and discriminative.
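GANs are one way to exploit unlabeled data; a much simpler illustration of the semi-supervised idea is self-training via pseudo-labeling: fit on the few labeled points, confidently label some unlabeled ones, then refit on the enlarged set. All data below is synthetic and one-dimensional to keep the sketch short:

```python
# Minimal self-training sketch of semi-supervised learning.

def fit_threshold(points):
    # 1-D classifier: the midpoint between the two class means.
    a = [x for x, y in points if y == 0]
    b = [x for x, y in points if y == 1]
    return (sum(a) / len(a) + sum(b) / len(b)) / 2

def self_train(labeled, unlabeled, margin=1.0):
    t = fit_threshold(labeled)
    # Pseudo-label only points far from the boundary (high confidence);
    # ambiguous points near the threshold are left out.
    confident = [(x, int(x > t)) for x in unlabeled if abs(x - t) > margin]
    return fit_threshold(labeled + confident)

labeled = [(0.0, 0), (10.0, 1)]            # very few labeled points
unlabeled = [0.5, 1.0, 8.5, 9.0, 5.2]      # many more unlabeled ones
t = self_train(labeled, unlabeled)
print(t)
```

The refit threshold moves toward the structure of the unlabeled data, which is the "exploits patterns and trends in data" point made above, achieved here without any extra human labeling.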


The rise of adaptive cybersecurity

The desirable end state - easier said than done - is to embrace an adaptive cybersecurity posture, supported by people, process and technology - that is more responsive to the dynamism of today's cybersecurity landscape. As research firm Ecosystm notes, "anticipating threats before they happen and responding instantly when attacks occur is critical to modern cybersecurity postures. It is equally important to be able to rapidly adapt to changing regulations. Companies need to move towards a position where monitoring is continuous, and postures can adapt, based on risks to the business and regulatory requirements. This approach requires security controls to automatically sense, detect, react, and respond to access requests, authentication needs, and outside and inside threats, and meet regulatory requirements." Adaptation is also likely in future to involve artificial intelligence. A golden example of applying AI adaptively for cybersecurity would be the ability to detect the presence of code, packages or dependencies that are impacted by zero-days or other vulnerabilities, and to block those threats.


The Software Architecture Handbook

One problem that comes up when implementing microservices is that the communication with front-end apps gets more complex. Now we have many servers responsible for different things, which means front-end apps would need to keep track of that info to know who to make requests to. Normally this problem gets solved by implementing an intermediary layer between the front-end apps and the microservices, known as the “backend for frontend” (BFF) pattern. This layer will receive all the front-end requests, redirect them to the corresponding microservice, receive the microservice response, and then redirect the response to the corresponding front-end app. The benefit of the BFF pattern is that we get the benefits of the microservices architecture without overcomplicating the communication with front-end apps. ... Horizontal scaling, on the other hand, means setting up more servers to perform the same task. Instead of having a single server responsible for streaming, we'll now have three. The requests performed by the clients will then be balanced between those three servers so that all handle an acceptable load.
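The intermediary layer described above can be sketched as a single routing function. The service names and response payloads here are invented; a real gateway would also handle authentication, retries, and response shaping per front-end:

```python
# Toy sketch of a BFF/gateway layer: one entry point that routes each
# front-end request to the right microservice and relays the response.

MICROSERVICES = {
    "users":   lambda req: {"service": "users",   "user": req["id"]},
    "orders":  lambda req: {"service": "orders",  "items": 3},
    "billing": lambda req: {"service": "billing", "balance": 42.0},
}

def gateway(request):
    # Front-end apps only ever talk to this layer; they never need to
    # know which server owns which capability.
    service = MICROSERVICES.get(request["route"])
    if service is None:
        return {"error": 404}
    return service(request)

print(gateway({"route": "users", "id": 7}))
print(gateway({"route": "inventory"}))
```

Note how adding, splitting, or relocating a microservice only changes the routing table, not the front-end apps, which is the decoupling benefit the excerpt claims.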


The Rise of Domain Experts in Deep Learning

Nowadays, a lot of it is people who are like, “Oh, my god, I feel like deep learning is starting to destroy expertise in my industry. People are doing stuff with a bit of deep learning that I can’t even conceive of, and I don’t want to miss out.” Some people are looking a bit further ahead, and they’re more, like, “Well, nobody is really using deep learning in my industry, but I can’t imagine it’s the one industry that’s not going to be affected, so I want to be the first.” Some people definitely have an idea for a company that they want to build. The other thing we get a lot of is companies sending a bunch of their research or engineering teams to do the course just because they feel like this is a corporate capability that they ought to have. And it’s particularly helpful with the online APIs that are out there now that people can play around with — Codex or DALL-E or whatever — and get a sense of, “Oh, this is a bit like something I do in my job, but it’s a bit different if I could tweak it in these ways.” However, these models also have the unfortunate side effect, maybe, of increasing the tendency of people to feel like AI innovation is only for big companies, and that it’s outside of their capabilities.


Q&A: Dropbox exec outlines company's journey into a remote-work world

"Underneath virtual first is a number of tenets that define how we think about the future of work. One of those is ‘asynchronous by default,' the idea being that if we're going to have people working remotely, that shouldn't mean they spend eight hours a day on video calls. Instead, at Dropbox, you're measured on your output and the impact that you make, rather than how many meetings you can sit in. "That then led us to think about how much time we should be spending in meetings, and as a result, we rolled out something called ‘core collaboration hours’ where employees reserve four hours each day to be available for meetings. That means there’s times when you're open to meet with your team or anyone else in the company, but also that you've got those other four hours in the day to focus on the work that you need to do. "Does that mean you wouldn't flex that to meet with somebody who's in a different time zone or something else? Absolutely not. It's your time to manage as an individual, because we're measuring you on the impact and output that you're making.


India poised to be at the center of metaverse-based gaming

Much before the metaverse became popular, games like Minecraft and Roblox had captivated scores of young gamers. The immersive gaming experience delivered by AR/VR and the rapid growth of devices powered by AR/VR and XR have further accelerated the growth of the metaverse to its current level. Meanwhile, the growth of high-speed Internet has acted as the catalyst driving this transformation. While VR headsets top the list of gaming devices in the metaverse, mobile phones, gaming PCs, gaming consoles, and hearables/wearables are also evolving to suit the demands of metaverse applications. The metaverse also blends games with other apps like live streaming, cryptocurrencies, and social media, creating several possibilities for players to transact across the ecosystem chain. For example, gamers can use NFTs/cryptocurrencies in the metaverse to purchase digital assets, which they can preserve for another game, maybe from a different publisher. Thus, players will earn greater value for money while also enjoying a near-real-world gaming experience with possibilities never imagined before.



Quote for the day:

"Most people live with pleasant illusions, but leaders must deal with hard realities." -- Orrin Woodward

Daily Tech Digest - July 26, 2022

Don’t get too emotional about emotion-reading AI

Unfortunately, the “science” of emotion detection is still something of a pseudoscience. The practical trouble with emotion detection AI, sometimes called affective computing, is simple: people aren’t so easy to read. Is that smile the result of happiness or embarrassment? Does that frown come from a deep inner feeling, or is it made ironically or in jest? Relying on AI to detect the emotional state of others can easily result in a false understanding. When applied to consequential tasks, like hiring or law enforcement, the AI can do more harm than good. It’s also true that people routinely mask their emotional state, especially in business and sales meetings. AI can detect facial expressions, but not the thoughts and feelings behind them. Business people smile and nod and empathetically frown because it’s appropriate in social interactions, not because they are revealing their true feelings. Conversely, people might dig deep, find their inner Meryl Streep and feign emotion to get the job or lie to Homeland Security. In other words, the knowledge that emotion AI is being applied creates a perverse incentive to game the technology.


How AI and decision intelligence are changing the way we work

Technology can also provide a simple yet powerful AI tool for employees to use during their day-to-day activities. They can capture lessons learned as they work in real time, and adjust their actions when a corrective action is needed, also in real time. Throughout this process, AI defines actionable takeaways, shares insights and offers concise lessons learned (suggesting corrective actions, for example), all of which can boost the entire team’s performance. Since AI turns the data collected from daily work into actionable lessons learned, every team member can contribute to and draw on their team’s collective knowledge — and the entire company’s collective knowledge as well. The technology prompts them to capture their work, and it “knows” when a team member should see information relevant to their current task. AI ensures everyone has the right data at the right time, exactly when they need it. In this vision of a data-driven environment, access to data liberates and empowers employees to pursue new ideas, Harvard Business Review writes.


The emergence of multi-cloud networking software

Contrary to general perception, Hielscher argues that many enterprises do not voluntarily choose to operate within a multi-cloud environment. In many cases, the environment is thrust upon them through a merger, acquisition, or an isolated departmental choice that preceded a decision to consolidate architectures. "This results in organizational gaps, skill-set gaps, and contractual and spending overlaps," he explains. "As with any IT strategy, the first step is to establish which goals are to be addressed and the timeframes to address them in." Potential adopters should be prepared to spend both time and money when evaluating and comparing MCNS products. "For example, organizations should plan costs associated with staffing a team of engineers to see them through the evaluation process," Howell says. While virtually all large cloud-focused enterprises, and many smaller organizations, can benefit from the right MCNS, it's important to keep an eye on service and the bottom line. "Benefits to the enterprise must be greater than the cost of the solution," Howell warns.


Software Methodologies — Waterfall vs Agile vs DevOps

Software development projects that are clearly defined, predictable, and unlikely to undergo considerable change are best handled using the waterfall method. Typically, smaller, simpler undertakings fall under this category. Waterfall projects don't incorporate feedback during the development cycle, are rigid in their process definition, and offer little to no output variability. Agile methods are built on incremental, iterative development that promptly produces a marketable business product. The product is broken down into smaller pieces throughout incremental development, and each piece is built, tested, and modified. Agile initiatives don't begin with thorough definitions in place. They rely on ongoing feedback to guide their progress. In Agile development, DevOps is all about merging teams and automation. Agile development is adaptable to both traditional and DevOps cultures. In contrast to a typical dev-QA-ops organization, developers do not throw code over the wall in DevOps. In a DevOps setup, the team is in charge of overseeing the entire procedure.


Why you need to protect abandoned digital assets

The dangers posed by these abandoned assets are multifarious. Local digital assets can be usurped and used for malicious purposes, such as identity theft and credit card fraud. Not only does this leave organisations open to significant fines for breaches of data protection laws, there is the associated reputational harm caused by these incidents. “The risk depends what the connection is pointing to and what authentication or security measures have been put in place,” says Nahmias. “Security teams tend to be more lenient about connections to internal resources than they are about connections to external ones.” The distributed nature of modern enterprise means that networks are no longer spiders’ webs, but a complex mesh. While this is a far more robust form of network connectivity, there are also far more connections that need to be managed. As such, there is a potential risk of network connections from abandoned assets still being active, essentially permitting access to the rest of the corporate network. In many ways, this is a far greater risk to the organisation, as malicious actors could potentially obtain confidential information through these unsecured connections.


How the cybersecurity skills gap threatens your business

The deficit in skilled cybersecurity personnel is now directly affecting businesses’ ability to remain secure. The World Economic Forum has stated that 60 per cent would “find it challenging to respond to a cybersecurity incident owing to the shortage of skills within their team” and industry body ISACA found that 69 per cent of those businesses that have suffered a cyber attack in the past year were somewhat or significantly understaffed. The impacts can be devastating. Accreditation body ISC(2)’s Cybersecurity Workforce Study found that staff shortages were leading to misconfigured systems, tardy patching of systems, lack of oversight, insufficient risk assessment, lack of threat awareness and rushed deployments. With these shortages now jeopardising businesses’ ability to function, the hiring function is under significant pressure to up its game. To make matters worse, these shortages are expected to intensify. Last year the Department for Culture, Media and Sport (DCMS) predicted there would be an annual shortfall of 10,000 new entrants into the cybersecurity market but in its latest report, released in May, that was revised to 14,000 every year. 


Kanban vs Scrum: Differences

Kanban is a project management method that helps you visualize the project status. Using it, you can readily visualize which tasks have been completed, which are currently in progress, and which tasks are still to be started. The primary aim of this method is to find out the potential roadblocks and resolve them ASAP while continuing to work on the project at an optimum speed. Besides ensuring timeliness and quality, Kanban ensures all team members can see the project and task status at any time. Thus, they can have a clear idea about the risks and complexity of the project and manage their time accordingly. However, the Kanban board involves minimal communication. ... Scrum is a popular agile method ideal for teams who need to deliver the product in the quickest possible time. This involves repeated testing and review of the product. It focuses on the continuous progress of the product by prioritizing teamwork. With the help of Scrum, product development teams can become more agile and decisive while becoming responsive to surprising and sudden changes. Being a highly-transparent process, it enables teams and organizations to evaluate projects better as it involves more practicality and fewer predictions.


8 top SBOM tools to consider

Indeed, SBOMs are no longer just a good idea; they're a federal mandate. According to President Joe Biden's May 12, 2021, Executive Order on Improving the Nation’s Cybersecurity, they're a requirement. The order defines an SBOM as "a formal record containing the details and supply chain relationships of various components used in building software." It's an especially important issue with open-source software, since "software developers and vendors often create products by assembling existing open-source and commercial software components." Is that true? Oh yes. We all know that open-source software is used everywhere for everything. But did you know that managed open-source company Tidelift finds that 92% of applications contain open-source components? In fact, the average modern program comprises 70% open-source software. Clearly, something needs doing. The answer, according to the Linux Foundation, Open Source Security Foundation (OpenSSF), and OpenChain, is SBOMs. Stephen Hendrick, the Linux Foundation's vice president of research, defines SBOMs as "formal and machine-readable metadata that uniquely identifies a software package and its contents; it may include other information about its contents, including copyrights and license data."
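To make the definition concrete, here is a rough sketch of what a minimal SBOM record can look like. The field names follow the CycloneDX JSON layout, but this is a hand-rolled illustration, not output from a real SBOM generator, and the component names are invented examples:

```python
import json

def make_sbom(app_name, app_version, components):
    """Build a minimal CycloneDX-style SBOM as a plain dict.

    A real SBOM tool would also record hashes, licenses, suppliers,
    and the dependency relationships between components.
    """
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.4",
        "metadata": {"component": {"name": app_name, "version": app_version}},
        "components": [
            {"type": "library", "name": name, "version": version}
            for name, version in components
        ],
    }

sbom = make_sbom("example-app", "2.1.0",
                 [("log4j-core", "2.17.1"), ("jackson-databind", "2.13.2")])
print(json.dumps(sbom, indent=2))
```

Even this toy record shows why SBOMs matter: given a vulnerable component name and version, a consumer can search the `components` list instead of reverse-engineering the build.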


The race to build a social media platform on the blockchain

DSCVR, a blockchain-based social network built on Dfinity’s Internet Computer protocol, has entered the race to build a scalable DeSo platform with $9 million in seed funding led by Polychain Capital. Other participants in the round include Upfront Ventures, Tomahawk VC, Fyrfly Venture Partners, Shima Capital and Bertelsmann Digital Media Investments (BDMI), according to the company. It’s a competitive space with plenty of startups and large companies racing to build a network that provides utility for its users. Earlier this month, ex-Coinbase employee Dan Romero secured $30 million led by a16z to develop Farcaster, a DeSo protocol that allows users to move their social identity across different apps. TechCrunch covered another seed-stage startup, Primitives, that raised a $4 million round in May for its own Solana-based DeSo network. Big tech is in the game, too — Twitter funds an offshoot of its service called Bluesky, an open-source DeSo project founded in 2019 that hasn’t gone live but is experimenting publicly with its development process.


7 ways to keep remote and hybrid teams connected

Marko Gargenta, CEO and founder of PlusPlus, a maker of internal training software that he founded after creating Twitter’s Twitter University, uses that idea to create company culture. It started at Twitter because he saw that some people had deep knowledge in topics that would benefit others. He started tapping them to give workshops and share that knowledge. Those 30-minute workshops were informal, in person, and wildly popular. “One in five engineers were regularly teaching classes,” he says. Those continued when the world went remote, but they shifted to canned videos. Those did not have the same impact. “People wanted human connection,” he says. “So, we started dialing the pendulum back toward live connection. Now they happen over Zoom but are very synchronous.” That has worked well. “If you look at ancient Greece,” says Gargenta, “Plato started The Academy. It was the place where people chasing ideas or mastery congregated, which created a sense of a culture. This pattern of people chasing mastery creates community. It’s what shaped ancient Greece, and all sorts of innovations came out of that.”



Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Landry

Daily Tech Digest - July 25, 2022

Digital presenteeism is creating a future of work that nobody wants

While technology has enabled more employees to work remotely – bringing considerable benefits in doing so – it has also facilitated digital presenteeism, Qatalog and GitLab concluded. One solution is to make technology less invasive and "more considerate of the user and completely redesigned for the new way of work, rather than supporting old habits in new environments" – although this may be easier said than done. According to Rauf, current solutions require a "radical redesign that is more considerate of the user and prioritizes their objectives, rather than simply capturing our attention." Culture shift is also necessary for async work to become normalized, says Rauf. This comes from the top, and starts with trust: "When leaders send a message to their team, make clear whether or not it needs an immediate response or better yet, schedule updates to go out when people are most likely online. If I message a team member at an odd hour, I prefix a 'for tomorrow' or 'no rush', so they know it's not an urgent issue."


Confronting the risks of artificial intelligence

Because AI is a relatively new force in business, few leaders have had the opportunity to hone their intuition about the full scope of societal, organizational, and individual risks, or to develop a working knowledge of their associated drivers, which range from the data fed into AI systems to the operation of algorithmic models and the interactions between humans and machines. As a result, executives often overlook potential perils (“We’re not using AI in anything that could ‘blow up,’ like self-driving cars”) or overestimate an organization’s risk-mitigation capabilities (“We’ve been doing analytics for a long time, so we already have the right controls in place, and our practices are in line with those of our industry peers”). It’s also common for leaders to lump in AI risks with others owned by specialists in the IT and analytics organizations. Leaders hoping to avoid, or at least mitigate, unintended consequences need both to build their pattern-recognition skills with respect to AI risks and to engage the entire organization so that it is ready to embrace the power and the responsibility associated with AI.


The AIoT Revolution: How AI and IoT Are Transforming Our World

AIoT is a growing field with many potential benefits. Businesses that adopt AIoT can improve their efficiency, decision-making, customization, and safety. ... Increased efficiency: By combining AI with IoT, businesses can automate tasks and processes that would otherwise be performed manually. This can free up employees to focus on more important tasks and increase overall productivity. Improved decision-making: By collecting data from various sources and using AI to analyze it, businesses can gain insights they wouldn’t otherwise have. It can help businesses make more informed decisions, from product development to marketing. Greater customization: Businesses can create customized products and services tailored to their customers’ needs and preferences using data collected from IoT devices. This can lead to increased customer satisfaction and loyalty. Reduced costs: Businesses can reduce their labor costs by automating tasks and processes. Additionally, AIoT can help businesses reduce their energy costs by optimizing their use of resources. Increased safety: By monitoring conditions and using AI to identify potential hazards, businesses can take steps to prevent accidents and injuries.


It's time for manufacturers to build a collaborative cybersecurity team

Despite the best-laid plans, bear in mind that these are active, interconnected and dynamic systems. It’s impossible to separate physical and cybersecurity elements, as their role in business operations is so foundational. As the landscape of new technologies and best practices changes, adapt along with it. Ensure the lines of communication are open, management maintains involvement in the process, and all the key parties across IT and OT are committed to working collaboratively to strengthen every element of security. These tenets will help manufacturing organizations stay nimble in the face of an ever-changing security landscape. As the convergence of IT and OT continues, the risk of cyberthreats will continue to rise along with it. Building a collaborative security team across both IT and OT will help to reduce organizational risk and fortify critical infrastructure. By involving leadership, setting a plan, and staying adaptable as things change, security leaders will be armed with a comprehensive security approach that supports near-term needs and offers long-term business sustainability.


Why diverse recruitment is the key to closing the cyber-security skills gap

When it comes to mitigating the ever-evolving cyber threat, diversity is a crucial, but often overlooked, factor. As cyber attacks are becoming increasingly culturally nuanced, it is important that we meet the challenge by drawing from a wide range of backgrounds and life experiences. Cyber attacks come from everywhere - from a wide range of ages, locations, and educational backgrounds - so our responders should too. Perceptions of cyber security often see it as revolving around highly complex technology and driven mainly by this. While tech clearly plays a crucial role in mitigating cyber attacks, successfully countering them would not be possible without the role performed by people. This is enriched hugely by having a workforce which covers as many educational and socio-economic backgrounds as possible. In making a concerted effort towards a more diverse workforce, the cyber-security industry will be able to gain a deeper awareness of the cultural nuances that underlie cyber attacks. It’s important to fully understand what we mean by diverse hiring. Considering entry routes into the industry is a big part of attracting a broader range of demographics. 


You have mountains of data, but do you know how to climb?

We have more data than ever before, but it is not enough to merely accumulate it. Dedicate time and resources to establishing digital governance to ensure the data you are using is clean, consistently implemented, and universally understood. ... The tech team is not solely responsible for the quality of our data—we all need to take ownership of and champion the data we use. Visualization tools bridge the gap between the tech team and the business team, doing away with barriers to entry and enabling end-to-end analytics. In this way, you can empower employees to immerse themselves in and take ownership of the data at hand. Users no longer have to submit a request to the tech team to create a report and twiddle their thumbs until it comes back. They can now take initiative and do it themselves, creating a more streamlined process and a more informed group of employees who can work quickly to make data-driven decisions. Furthermore, when you empower people to take control of their data and ask their own questions, they may uncover new insights they would never have found when presented with pre-packaged reports.


Software Supply Chain Concerns Reach C-Suite

From Cornell's perspective, DevOps — or hopefully, DevSecOps groups — should really spearhead the management of software supply chain risk. "They are the ones who own the software development process, and they see the code that is written," he says. "They see the components that are pulled in. They watch the software get built. And they make it available to whoever is next on down the line." Given this vantage point, they can help to impact — in a positive way — an organization's software supply chain security status by implementing good policies and practices around what open source code is included in their software and when those open source components are upgraded. "Forward-leaning DevSecOps teams can take advantage of their automation and testing to start pushing for more aggressive component-upgrade life cycles and other approaches that help minimize technical debt," he explains. He says they’re also well positioned, and own the tooling, to help generate SBOMs that they can then provide to software consumers who are in turn looking to manage their supply chain risk.


Know Your Risks – and Your Friends’ Risks, Too

Identifying risks and documenting response actions are only part of the equation. Crucial to the overall C-SCRM process is the communication and education of all parties involved about organizational risks and how to respond. Organizations must ensure that all personnel and third-party partners are trained on supply chain risks, encourage awareness from the top down, and involve partners and suppliers in organization-wide tests and assessments of response plans. Organizations should establish open communications with their supplier partners about risk concerns and encourage partners to do the same in return. The general idea is individual strength through community strength. As an organization matures its C-SCRM (or overall cybersecurity) process, lessons learned and best practices should be shared along the way to help bolster others’ programs. The concept of C-SCRM is not a new one. In fact, there are many sources that have provided guidance on the topic over the years. The National Institute of Standards and Technology (NIST) has a Special Publication (SP) 800-161 and an Internal Report (IR) 8276 on the subject. 


3 data quality metrics dataops should prioritize

The good news is that as business leaders trust their data, they’ll use it more for decision-making, analysis, and prediction. With that comes an expectation that the data, network, and systems for accessing key data sources are available and reliable. Ian Funnell, manager of developer relations at Matillion, says, “The key data quality metric for dataops teams to prioritize is availability. Data quality starts at the source because it’s the source data that run today’s business operations.” Funnell suggests that dataops must also show they can drive data and systems improvements. He says, “Dataops is concerned with the automation of the data processing life cycle that powers data integration and, when used properly, allows quick and reliable data processing changes.” Barr Moses, CEO and cofounder of Monte Carlo Data, shares a similar perspective. “After speaking with hundreds of data teams over the years about how they measure the impact of data quality or lack thereof, I found that two key metrics—time to detection and time to resolution for data downtime—offer a good start.”
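The two metrics Moses names are straightforward to operationalize once incident timestamps are captured. A minimal sketch (the tuple layout below is an assumption for illustration, not Monte Carlo's actual schema):

```python
from datetime import datetime, timedelta

def downtime_metrics(incidents):
    """Compute mean time to detection (TTD) and mean time to resolution (TTR).

    Each incident is a (started, detected, resolved) tuple of datetimes:
    TTD measures how long bad data went unnoticed; TTR measures how long
    the team took to fix it once detected.
    """
    n = len(incidents)
    ttd = sum((detected - started for started, detected, _ in incidents),
              timedelta()) / n
    ttr = sum((resolved - detected for _, detected, resolved in incidents),
              timedelta()) / n
    return ttd, ttr

t0 = datetime(2022, 7, 25, 9, 0)
incidents = [
    (t0, t0 + timedelta(minutes=30), t0 + timedelta(hours=2)),
    (t0, t0 + timedelta(minutes=10), t0 + timedelta(hours=1)),
]
ttd, ttr = downtime_metrics(incidents)
print(ttd, ttr)
```

Tracking these two averages over time gives a dataops team a simple, defensible trend line for whether data reliability is improving.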


How Optic Detects NFT Fraud with AI and Machine Learning

The NFT space has ongoing issues with fraud, including bad actors wholesale lifting art from one project and using it in a second project — a process often referred to as “copyminting.” These are derivative projects with a few too many similarities to the original to be considered anything other than ripoffs. While most of these duplicate projects do very little sales volume relative to the original, they may damage the underlying brand, contribute to the overall distrust of the NFT space, or trick less savvy buyers into spending money on something that’s the jpg equivalent of a street vendor shilling fake Rolex watches. To help combat this fraud, a few companies are emerging that specialize in fraud detection in NFTs. They tend to leverage blockchain data to help determine which project came first and apply some image detection to find metadata matches. One of these solutions is Optic, which uses artificial intelligence and machine learning to analyze the images associated with an NFT, which helps NFT marketplaces and minting platforms catch copies and protect both creators and buyers.
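Optic's models are proprietary, but the general class of technique — reducing an image to a compact fingerprint and comparing fingerprints — can be sketched with a toy average-hash over a grayscale pixel grid. Real systems use learned embeddings and far more robust features; this only illustrates why a slightly altered copy still matches while unrelated art does not:

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if above the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes (0 = near-identical)."""
    return sum(a != b for a, b in zip(h1, h2))

original  = [[200, 200, 10], [200, 200, 10], [10, 10, 10]]
copymint  = [[198, 201, 12], [199, 200, 9], [11, 10, 12]]   # slightly altered copy
unrelated = [[10, 200, 10], [200, 10, 200], [10, 200, 10]]

d_copy = hamming(average_hash(original), average_hash(copymint))
d_other = hamming(average_hash(original), average_hash(unrelated))
print(d_copy, d_other)   # copymint distance is small; unrelated art is far
```

Combined with on-chain timestamps establishing which mint came first, a marketplace can flag the later, near-duplicate project for review.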



Quote for the day:

"The test we must set for ourselves is not to march alone but to march in such a way that others will wish to join us." -- Hubert Humphrey

Daily Tech Digest - July 24, 2022

AI can see things we can’t – but does that include the future?

“What we focus on is augmented intelligence for humans to take action [on],” says Radtke when I raise this concern. “We are not prescribing the action to be taken based on the insights that we get – we're trying to make sure that the human has all the necessary intelligence to drive the behavior that they need to drive. We're reporting facts back – this actually happened here, this is what has happened in the past – and you can take action based on that. It's all about driving improved safety for everyone in that area.” When I press him on the possible human rights concern and the inevitable pushback that will arise if AI is routinely used to pre-emptively police areas deemed as problematic, he answers: “I think that with every technology that's ever been out there in history there is always a way to use it for non-good. I think you have to focus on the good that it can provide and make sure that you police the non-good behavior that could happen from it.” This will entail some sort of oversight. “There are consortiums out there to help drive the ethical adoption of AI throughout the industry – we definitely keep aware of those.


RPA vs. BPA: Which approach to automation should you use?

Where BPA and RPA overlap, according to Mullakara, is the goal of eliminating human intervention from processes through automation. “The whole idea of BPA was to remove people from the process and that's kind of what RPA is also aiming for. In the sense of the simple workflow automation, both can do it. RPA does it through a UI integration whereas BPA does it mostly with APIs. And you know, automating the workflow with the systems by invoking the systems,” he tells us. However, Taulli explains that automation won’t really get rid of people at this point; the usual suspects, such as recessions, will. Mullakara agrees that this messaging around BPA and RPA is a common misconception and has earned both technologies quite a bad rap. “So, what you actually automate with RPA for example is tasks – it's not jobs. It's not an entire job even if it's a process. It’s not jobs, so we still need people,” he says.


All the Things a Service Mesh Can Do

Many organizations have different teams and services dispersed across different networks and regions of a given cloud. Many also have services deployed across multiple cloud environments. Securely connecting these services across different cloud networks is a highly desirable function that typically requires significant effort by network teams. In addition, limitations that require non-overlapping Classless Inter-Domain Routing (CIDR) ranges between subnets can prevent network connectivity between virtual private clouds (VPCs) and virtual networks (VNETs). Service mesh products can securely connect services running on different cloud networks without requiring the same level of effort. HashiCorp Consul, for example, supports a multi-datacenter topology that uses mesh gateways to establish secure connections between multiple Consul deployments running in different networks across clouds. Team A can deploy a Consul cluster on EKS. Team B can deploy a separate Consul cluster on AKS. Team C can deploy a Consul cluster on virtual machines in a private on-premises data center. 
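For Consul specifically, routing cross-datacenter traffic through mesh gateways is typically enabled with a proxy-defaults configuration entry. The sketch below follows Consul's documented `MeshGateway` modes, but treat it as illustrative and check the documentation for your Consul version:

```hcl
# proxy-defaults.hcl, applied with: consul config write proxy-defaults.hcl
Kind = "proxy-defaults"
Name = "global"

MeshGateway {
  # "local": send cross-datacenter traffic via a mesh gateway in this
  # datacenter; other documented modes are "remote" and "none"
  Mode = "local"
}
```

Because the gateways terminate and forward mutual-TLS traffic, the three clusters above can interoperate even when their subnets use overlapping CIDR ranges.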


Snowballing Ransomware Variants Highlight Growing Threat to VMware ESXi Environments

The proliferation of ransomware targeting ESXi systems poses a major threat to organizations using the technology, security experts have noted. An attacker that gains access to an ESXi host system can infect all virtual machines running on it and the host itself. If the host is part of a larger cluster with shared storage volumes, an attacker can infect all VMs in the cluster as well, causing widespread damage. "If a VMware guest server is encrypted at the operating system level, recovery from VMware backups or snapshots can be fairly easy," McGuffin says. "[But] if the VMware server itself is used to encrypt the guests, those backups and snapshots are likely encrypted as well." Recovering from such an attack would require first recovering the infrastructure and then the virtual machines. "Organizations should consider truly offline storage for backups where they will be unavailable for attackers to encrypt," McGuffin adds. Vulnerabilities are another factor that is likely fueling attacker interest in ESXi. VMware has disclosed multiple vulnerabilities in recent months.


5 typical beginner mistakes in Machine Learning

Tree-based models don’t need data normalization, as feature raw values are not used as multipliers and outliers don’t impact them. Neural networks might not need explicit normalization either — for example, if the network already contains a layer that handles normalization internally (e.g., the BatchNormalization layer in Keras). And in some cases even linear regression might not need data normalization: when all the features are already in similar value ranges and have the same meaning. For example, when the model is applied to time-series data and all the features are historical values of the same parameter. In practice, applying unneeded data normalization won’t necessarily hurt the model. Mostly, the results will be very similar to those with normalization skipped. However, the additional unnecessary data transformation complicates the solution and increases the risk of introducing bugs.
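When normalization is needed, z-score standardization is the common default. A quick pure-Python sketch (in practice you would use a library scaler fit on training data only) that also shows why tree splits are indifferent to it:

```python
def standardize(values):
    """Z-score normalization: subtract the mean, divide by the std deviation."""
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / std for v in values]

raw = [10.0, 20.0, 30.0, 40.0]
scaled = standardize(raw)
print(scaled)

# A tree split only compares a feature against a threshold, so any
# monotonic rescaling leaves the split ordering, and thus the tree's
# predictions, unchanged: the threshold simply moves with the data.
is_high_raw = [v > 25.0 for v in raw]       # split at 25 on raw values
is_high_scaled = [v > 0.0 for v in scaled]  # the same split, rescaled
```

The two boolean lists are identical, which is the concrete reason the article can say tree-based models don't need normalization.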


Git for Network Engineers Series – The Basics

Version control systems, primarily Git, are becoming more and more prevalent outside of the realm of software development. The increase in DevOps, network automation, and infrastructure as code practices over the last decade has made it even more important to not only be familiar with Git, but proficient with it. As teams move into the realm of infrastructure as code, understanding and using Git is a key skill. ... Unlike other Version Control Systems, Git uses a snapshot method to track changes instead of a delta-based method. Every time you commit in Git, it basically takes a snapshot of those files that have been changed while simply linking unchanged files to a previous snapshot, efficiently storing the history of the files. Think of it as a series of snapshots where only the changed files are referenced in the snapshot, and unchanged files are referenced in previous snapshots. Git operations are local, for the most part, meaning it does not need to interact with a remote or central repository. 
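Git's snapshot model can be mimicked with a content-addressable store: each commit records a tree of path-to-hash pairs, and an unchanged file's content is stored once and merely referenced again. This is an illustrative toy, not Git's actual object format:

```python
import hashlib

blob_store = {}   # content-addressable storage: hash -> file content

def snapshot(files):
    """Record a commit 'tree': store each file's content under its hash."""
    tree = {}
    for path, content in files.items():
        digest = hashlib.sha1(content.encode()).hexdigest()
        blob_store[digest] = content   # a no-op if this blob already exists
        tree[path] = digest
    return tree

commit1 = snapshot({"README": "v1", "main.py": "print('hi')"})
commit2 = snapshot({"README": "v2", "main.py": "print('hi')"})  # only README changed

# Both commits reference the same main.py blob, so across two full
# snapshots only three blobs are stored in total.
print(len(blob_store), commit1["main.py"] == commit2["main.py"])
```

This is why full snapshots are cheap: identical content hashes to the same key, so "linking unchanged files to a previous snapshot" costs nothing but the reference.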


Deep learning delivers proactive cyber defense

The timing couldn’t be better. The increasing availability of ransomware-as-a-service offerings, such as ransomware kits and target lists, are making it easier than ever for bad actors—even those with limited experience—to launch a ransomware attack, causing crippling damage in the very first moments of infection. Other sophisticated attackers use targeted strikes, in which the ransomware is placed inside the network to trigger on command. Another cause for concern is the increasing disappearance of an IT environment’s perimeter as cloud compute storage and resources move to the edge. Today’s organizations must secure endpoints or entry points of end-user devices, such as desktops, laptops, and mobile devices, from being exploited by malicious hackers—a challenging feat, according to Michael Suby, research vice president, security and trust, at IDC. “Attacks continue to evolve, as do the endpoints themselves and the end users who utilize their devices,” he says. “These dynamic circumstances create a trifecta for bad actors to enter and establish a presence on any endpoint and use that endpoint to stage an attack sequence.”


Towards Geometric Deep Learning III: First Geometric Architectures

The neocognitron consisted of interleaved S- and C-layers of neurons (a naming convention reflecting its inspiration in the biological visual cortex); the neurons in each layer were arranged in 2D arrays following the structure of the input image (‘retinotopic’), with multiple ‘cell-planes’ (feature maps in modern terminology) per layer. The S-layers were designed to be translationally symmetric: they aggregated inputs from a local receptive field using shared learnable weights, resulting in cells in a single cell-plane having receptive fields of the same function, but at different positions. The rationale was to pick up patterns that could appear anywhere in the input. The C-layers were fixed and performed local pooling (a weighted average), affording insensitivity to the specific location of the pattern: a C-neuron would be activated if any of the neurons in its input were activated. Since the main application of the neocognitron was character recognition, translation invariance was crucial. 
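In modern terms the S-layers are convolutions and the C-layers are pooling. A tiny 1-D sketch (max pooling rather than the neocognitron's weighted average, for simplicity) shows how shared weights plus pooling yield the translation insensitivity described above:

```python
def s_layer(signal, kernel):
    """Shared-weight local receptive fields: a plain (valid) 1-D convolution.
    The same kernel is applied at every position, like an S-cell-plane."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def c_layer(responses):
    """Pooling cell: active if any cell in its input is active (a max here)."""
    return max(responses)

kernel = [1.0, -1.0, 1.0]                  # one pattern detector, used everywhere
pattern_left  = [1, 0, 1, 0, 0, 0, 0, 0]   # target pattern at the left edge
pattern_right = [0, 0, 0, 0, 1, 0, 1, 0]   # the same pattern, shifted right

out_left = c_layer(s_layer(pattern_left, kernel))
out_right = c_layer(s_layer(pattern_right, kernel))
print(out_left, out_right)   # identical: location no longer matters
```

The S-layer detects the pattern wherever it occurs because the weights are shared; the C-layer then discards the position, which is exactly what character recognition needed.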


Don’t Just Climb the Ladder. Explore the Jungle Gym

Most of us do not approach work (or life) with a master plan in mind, and many of the steps we take are beautiful accidents that help us become who we are. “I’m 67 years old,” Guy said, “and I think I finally found my true calling.” He was referring to his podcast, Remarkable People, where he interviews exceptional leaders and innovators (think Jane Goodall, Neil deGrasse Tyson, Steve Wozniak, and Kristi Yamaguchi) about how they got to be remarkable. “In a sense, my whole career has prepared me for this moment. I’ve had decades of experience in startups and large companies. So that gives me the data to ask great questions that my listeners really want the answers to,” Guy said. Guy is undeniably brilliant, and his success is no accident. But still, he believes that luck has played a part in his success. In his words, “Basically, I’ve come to the conclusion that it’s better to be lucky than smart.” Maybe Guy is right. Or perhaps, the smartest people know when to take advantage of luck and act on the opportunities that present themselves. Whatever the case, it’s important to take calculated risks.


Should You Invest in a Digital Transformation Office?

With the digital transformation office comes a transformation team, who initiates organizational change. Laute says that it’s crucial that everyone inside the organization stand behind the transformation team if they truly want to see changes happening. “You need to have an environment where these people, the transformation lead and the transformation team, are allowed and are not afraid to speak up. These people shouldn't be biased, not just following what the executive board says, but really [being] able to challenge and to speak up. And they should have the freedom to call out if something is going in the wrong direction, may it be content or behavioral-wise,” she explains. And while clearly there can be technology-related challenges, Laute tells us that digital transformation is also a people problem, and calls for a change in culture and mindset in order to find success. The cultural shift, she explains, is truly where everything starts to come together in order to get the transformation going. “Digital [transformation] is not only technology. You need to change behaviors and you need to change processes. And most of the time, you change your target operating model, right?”



Quote for the day:

"Uncertainty is a permanent part of the leadership landscape. It never goes away." -- Andy Stanley

Daily Tech Digest - July 23, 2022

How CIOs can unite sustainability and technology

CIOs must be proactive in progressing these organizational shifts, as business leaders will continue to lean on them to ensure company technologies are providing solutions without contributing to an environmental problem. While in years past this was not an active concern, the information and communications technology (ICT) sector has recently become a larger source of climate-related impact. Producing only 1.5% of global CO2 emissions in 2007, the sector now accounts for 4% and could potentially reach 14% by 2040. Fortunately, CIOs can course-correct by focusing on three key areas: Net zero - Utilize green software practices that can reduce energy consumption; Trust - Build systems that protect privacy and are fair, transparent, robust, and accessible; and Governance - Make ESG the focus of technology, not an afterthought. As a first step in this transition, CIOs can begin assessing their organization’s technology through the lens of sustainability to ensure that those goals are being thought about in every facet of the business. In addition, they can connect with other leaders in the company to encourage greater emphasis and dialogue in cross-organization planning for technology solutions as they relate to sustainability targets.


Design patterns for asynchronous API communication

Request and response topics are more or less what they sound like: a client sends a request message through a topic to a consumer; the consumer performs some action, then returns a response message through a topic back to the client. This pattern is a little less generally useful than the previous two. In general, it creates an orchestration architecture, where a service explicitly tells other services what to do. There are a few reasons why you might want to use topics to power this instead of synchronous APIs. You want to keep the low coupling between services that a message broker gives us: if the service that’s doing the work ever changes, the producing service doesn’t need to know about it, since it’s just firing a request into a topic rather than directly asking a service. The task takes a long time to finish, to the point where a synchronous request would often time out; in this case, you may decide to make use of the response topic but still make your request synchronously. Or you’re already using a message broker for most of your communication and want to make use of the existing schema enforcement and backwards compatibility that are automatically supported by the tools used with Kafka.
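The essential moving parts of the pattern — a request topic, a response topic, and a correlation ID tying a response back to its request — can be sketched in memory. Real systems would use Kafka topics and consumer groups; the "broker" below is just a dict of lists, and the workload is an invented placeholder:

```python
import uuid

topics = {"requests": [], "responses": []}   # stand-ins for broker topics

def publish(topic, message):
    topics[topic].append(message)

def request(payload):
    """Client side: publish a request tagged with a unique correlation ID."""
    correlation_id = str(uuid.uuid4())
    publish("requests", {"correlation_id": correlation_id, "payload": payload})
    return correlation_id

def serve():
    """Worker side: drain the request topic and publish responses.
    The worker never learns who asked; it only sees the topic."""
    while topics["requests"]:
        msg = topics["requests"].pop(0)
        result = msg["payload"].upper()       # placeholder for the real work
        publish("responses", {"correlation_id": msg["correlation_id"],
                              "payload": result})

cid = request("resize video 42")
serve()
reply = next(m for m in topics["responses"] if m["correlation_id"] == cid)
print(reply["payload"])
```

Note how the correlation ID is what lets many clients share one response topic: each client filters for its own ID, and the worker remains completely decoupled from its callers.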


What is Data Gravity? AWS, Azure Pull Data to the Cloud

As enterprises create ever more data, they aggregate, store, and exchange this data, attracting progressively more applications and services to begin analyzing and processing their data. This “attraction” is caused, because these applications and services require higher bandwidth and/or lower latency access to the data. Therefore, as data accumulates in size, instead of pushing data over networks towards applications and services, “gravity” begins pulling applications and services to the data. This process repeats, which produces a compounding effect, meaning that as the scale of data grows, it becomes “heavier” and increasingly difficult to replicate and relocate. Ultimately, the “weight” of this data being created and stored generates a “force” that results in an inability to move the data, hence the term data gravity. Data gravity presents a fundamental problem for enterprises, which is the inability to move data at-scale. Consequently, data gravity impedes enterprise workflow performance, heightens security & regulatory concerns, and increases costs.


Windows 11 is getting a new security setting to block ransomware attacks

The new feature is rolling out to Windows 11 in a recent Insider test build, but the feature is also being backported to Windows 10 desktop and server, according to Dave Weston, vice president of OS Security and Enterprise at Microsoft. "Win11 builds now have a DEFAULT account lockout policy to mitigate RDP and other brute force password vectors. This technique is very commonly used in Human Operated Ransomware and other attacks – this control will make brute forcing much harder which is awesome!," Weston tweeted. Weston emphasized "default" because the policy is already an option in Windows 10 but isn't enabled by default. That's big news and is a parallel to Microsoft's default block on internet macros in Office on Windows devices, which is also a major avenue for malware attacks on Windows systems through email attachments and links. Microsoft paused the default internet macro block this month but will re-release the default macro block soon. The default block on untrusted macros is a powerful control against a technique that relied on end users being tricked into clicking an option to enable macros, despite warnings in Office against doing so.
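On an individual machine, the lockout policy can be inspected, and on editions without Group Policy tooling adjusted, from an elevated command prompt with the long-standing `net accounts` command. The values below mirror the reported defaults of 10 failed attempts and a 10-minute lockout, but verify the switches with `net help accounts` on your build before relying on them:

```bat
:: view the current account lockout policy
net accounts

:: lock out after 10 failed sign-in attempts, for 10 minutes,
:: resetting the failed-attempt counter after 10 minutes
net accounts /lockoutthreshold:10 /lockoutduration:10 /lockoutwindow:10
```

Making this a default matters because RDP brute-forcing depends on unlimited guesses; even a modest threshold turns an hours-long password spray into a non-starter.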


Untangling Enterprise API Architecture with GraphQL

GraphQL is a query language that allows you to describe your data requirements in a more powerful and developer-friendly way than REST or SOAP. Its composability can help untangle enterprise API architecture. GraphQL becomes the communication layer for your services. Using the GraphQL specification, you get a unified experience when interacting with your services. Every service in your API architecture becomes a graph that exposes a GraphQL API. In this graph, everyone who wants to integrate with or consume the GraphQL API can find all the data it contains. Data in GraphQL is represented by a schema that describes the available data structures, the shape of the data, and how to retrieve it. Schemas must comply with the GraphQL specification, and the part of the organization responsible for the service can keep this schema coherent. GraphQL composability allows you to combine these different graphs — or subgraphs — into one unified graph. Many tools are available to create such a "graph of graphs."
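As a concrete illustration, here is a minimal sketch of what one such subgraph schema might look like in GraphQL's schema definition language (the `User` type and the "accounts" service name are hypothetical examples, not from the article):

```graphql
# Hypothetical "accounts" subgraph: declares the data this service
# exposes and the entry point for querying it.
type User {
  id: ID!
  name: String!
}

type Query {
  "Fetch a single user by ID."
  user(id: ID!): User
}
```

A composition tool can then merge this subgraph with others (say, an orders subgraph that also references `User`) into the single unified graph the article describes.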


How The Great Resignation Will Become The Great Reconfiguration

We are witnessing a great reconfiguration of how employees expect to be treated by employers. Henry Ford gave his workers a full two-day weekend as early as 1926, but now a weekend is expected in most office-based jobs—unless the job involves serving customers over the weekend! We have certain expectations of the employer and employee relationship, and what was normal before the pandemic is now being challenged. Even Wall Street cannot hold back the tide. People expect more flexibility over their hours and work location. Within a few years, this will be normalized by the effect of the top talent expecting it and that expectation filtering throughout company culture. This is how work will function post-pandemic. The Great Resignation is the first step, but eventually, I believe we will call the 2020s the Great Reconfiguration. ... WFH will live on - You might want your team back in the office, but they know they can be more productive remotely, and research backs up the employees. A new Harvard study suggests that all that in-person time can be compressed into just one or two days a week.


Will Your Cyber-Insurance Premiums Protect You in Times of War?

Due to the changing market and geopolitical situation, you need to be keenly aware of the exact kind of cyber-insurance coverage your organization requires. Your decisions should be dictated by the industry you're working in, the security risk, and how much you stand to lose in the event of an attack. It's important to note that insurance providers are also becoming more stringent in their requirements for companies to obtain cyber coverage in the first place. Carriers are increasingly requiring companies to practice good cyber hygiene and have rigid cybersecurity protocols in place before even offering a quote. Once you have proper cybersecurity protocols in place, you should be better positioned to qualify for adequate plans. However, remember that no two plans are alike or equally inclusive. When choosing a plan, be sure to look for any fine print regarding act-of-war and terrorism exclusions, or those for other "hostile acts." Even when you've done everything right, your carrier can still attempt to deny you coverage under these loopholes.


The new CIO playbook: 7 tips for success from day one

It’s possible that, up to now, your focus has been solely on technology. One of the big differentiators between working on an IT team, even in a leadership role, and being CIO is that you will need to understand how technology fits into the larger business goals of the company. You will need to be a technology translator and advocate for the CEO, business leadership, and board. For that, you have to understand the business first. “We can come up with creative technical solutions,” says Roberge. “We know you need an email system, a CRM system, and an ERP. But how does the business want to use those tools? How is the sales guy going to sell product and be able to get a quote out, get the tax requirements, things like that?” Business leaders are unlikely to understand technology the way you do. So, you must understand the business in order to help the other business units, the CEO, and the board understand how technology can fit into their goals. “As technology experts, we know our technology extremely well,” says Roberge.


Explained: How to tell if artificial intelligence is working the way we want it to

Far from a silver bullet, explanation methods have their share of problems. For one, Ghassemi’s recent research has shown that explanation methods can perpetuate biases and lead to worse outcomes for people from disadvantaged groups. Another pitfall of explanation methods is that it is often impossible to tell whether the explanation method is correct in the first place. One would need to compare the explanations to the actual model, but since the user doesn’t know how the model works, this is circular logic, Zhou says. He and other researchers are working on improving explanation methods so they are more faithful to the actual model’s predictions, but Zhou cautions that even the best explanation should be taken with a grain of salt. “In addition, people generally perceive these models to be human-like decision makers, and we are prone to overgeneralization. We need to calm people down and hold them back to really make sure that the generalized model understanding they build from these local explanations are balanced,” he adds.


Future-Proofing Organisations Through Transparency

Partners that trust each other perform better. Both parties should clearly understand the decisions and actions they own. Consequently, organisations cooperate with less friction and enhance accessibility to relevant information. A study in the Harvard Business Review notes that managers frequently adopt a "trust but verify" approach, evaluating potential partner behaviours during negotiations to determine whether they are open and honest. As one manager in the study advised, “To see if [the] person is forthcoming; ask a question you know the answer to”. Transparent companies are viewed as ‘ethical’ because their customers believe they have nothing to hide. The new era of the business-to-business model demands transparency. Companies want to know that what they do matters and to be able to trace a project back to their organisation’s vision. In a modern world where sustainability is not just a buzzword, clients want to know that partnerships are built with brands that support their morals. Unsatisfied customers disengage from a company to find one that works with them to achieve a greater outcome and takes accountability for its actions.



Quote for the day:

"People will not change their minds but they will make new decisions based upon new information." -- Orrin Woodward