Daily Tech Digest - June 19, 2023

Finding the Nirvana of information access control or something like it

In the mythical land of Nirvana, where everything is perfect, CISOs would have all the resources they needed to protect corporate information. The harsh reality, which every CISO experiences daily, is that few entities have unlimited resources. Indeed, in many entities, when the cost-cutting arrives, it is not unusual for security programs that have not (so far) positioned themselves as a key ingredient in revenue preservation to fall by the wayside — if you ever needed motivation to exercise access control over information, there you have it. ... For those who thought they were finished with Boolean logic in secondary school, it’s back — and attribute-based access control (ABAC) is a prime example of the practicality of using that logic in decision trees to determine access permissions. The adoption of ABAC allows access to protected information to be “hyper-granular”: an individual’s access may initially be defined by their role and will certainly fall within the established policies.
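As a minimal sketch of how ABAC's Boolean logic might look in practice (the roles, attributes, and policy rules below are invented for illustration, not drawn from any specific product):

```python
from dataclasses import dataclass

@dataclass
class Request:
    role: str
    department: str
    clearance: int
    resource_sensitivity: int

# Each rule is a Boolean predicate over the request's attributes;
# access is granted only if every rule evaluates to True.
POLICIES = [
    lambda r: r.role in {"analyst", "manager"},
    lambda r: r.department == "finance",
    lambda r: r.clearance >= r.resource_sensitivity,
]

def is_allowed(request: Request) -> bool:
    return all(rule(request) for rule in POLICIES)

print(is_allowed(Request("analyst", "finance", 3, 2)))  # True
print(is_allowed(Request("intern", "finance", 3, 2)))   # False
```

Because each rule is just a predicate, the combination can be made as fine-grained ("hyper-granular") as policy demands by adding attributes and conditions.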


Goodbyes are difficult, IT offboarding processes make them harder

To ensure that the business continues even though the employee is gone, accounts are often left active during a grace period in which the departed employee’s credentials can still be used to access the organization’s networks. This is great for retaining the knowledge the employee accumulated and ensuring that their replacement is well briefed, but since the employee is gone, nobody will remember to monitor the account — as malicious actors will soon notice. The employee may also have been forwarding emails to a personal email account or accessing work email from personal devices for business purposes, making it easier for hackers to obtain sensitive company data and impossible for the organization to know. Existing offboarding processes may frustrate business executives due to their rigidity – and they aren’t alone in their annoyance. What’s bad for security is also, inevitably, bad for business. Security teams today must manually ensure that all access privileges, including access to various systems, applications, databases and physical facilities, are promptly terminated.
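The monitoring gap described above can be narrowed with automation. As a hedged sketch (the grace period, usernames, and dates are all invented), a scheduled deprovisioning job might flag accounts whose grace period has lapsed:

```python
from datetime import date, timedelta

GRACE_PERIOD = timedelta(days=30)  # assumed policy window

# (username, offboarding date) pairs; in practice these would come
# from the HR system and the identity provider.
departed = [
    ("alice", date(2023, 3, 1)),
    ("bob",   date(2023, 6, 10)),
]

def accounts_to_disable(departed, today):
    """Return accounts whose grace period has expired."""
    return [user for user, left in departed if today - left > GRACE_PERIOD]

print(accounts_to_disable(departed, date(2023, 6, 19)))  # ['alice']
```

A real implementation would feed this list into the identity provider's deprovisioning API rather than printing it, but the point stands: accounts nobody remembers to watch can at least be disabled on schedule.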

Leaders are made, not born: Although this is technically correct, which is why we rarely see 5-year-olds running companies or countries (though, in fairness, the adults who do often fail to provide convincing signs of superior emotional or intellectual maturity), people’s potential for leadership can be detected at a very young age. Furthermore, the dispositional enablers that increase people’s talent for leadership have a clear biological and genetic basis. ... The best leaders are confident: Not true. Although confidence does predict whether someone is picked for a leadership role, once you account for competence, expertise, intelligence, and relevant personality traits, such as curiosity, empathy, and drive, confidence is mostly irrelevant. And yet our failure to focus on competence rather than confidence, and our lazy tendency to select leaders on style rather than substance (as in presidential debates, job interviews, and short-term, in-person interactions), contribute to most of the leadership problems described in point 1. Note that when leaders have too much confidence, they underestimate their flaws and limitations, putting themselves and others at risk.


How Organizations Can Create Successful Process Automation Strategies

Organizations can promote more collaboration by adopting a modified “Center of Excellence” (CoE) approach. In some companies, that might mean assembling a community devoted to process automation tasks and strategies, in which practitioners can share best practices and ask questions of one another. The CoE should help members from business and IT teams work together better by coordinating tasks, avoiding reinventing projects from scratch, and generally empowering them to drive continuous improvement together. Some organizations may want to create a central focus on process automation without using the actual CoE term. The terminology itself carries some legacy baggage from centralized Business Process Management (BPM) software. Some organizations relied on a centralized approach for their CoE, counting on one team to implement process automation for the entire organization. That approach often led to bottlenecks for both developers and line-of-business leaders, giving the CoE a bad reputation with few demonstrable results.


8 habits of highly secure remote workers

By working in a public place you are exposing yourself to serious cybersecurity risks. The first, and most direct, is over-the-shoulder attacks, also known as shoulder surfing. All this takes is an observant, determined hacker sitting in the same space as you, paying close attention to your every move. ... "As you use public Wi-Fi, you are exposing your laptop or your device to the same network somebody else can log on to, so that means they can actually peruse through your network, depending on the security of the local network on your laptop," says Gartner VP Analyst Patrick Hevesi. Doing work in a public space while not using public Wi-Fi may seem like a paradox, but there are simple and secure solutions. The first is using a VPN when accessing corporate information in public. ... "Your security is as good as your password, because that's the first line of defense," says Shah. "You want to make sure that you have a good strong password, and also don't use the same password for all the other sites you may be accessing."


Multicloud deployments don't have to be so complicated

The solution to these problems is not scrapping a complex cloud deployment. Indeed, considering the advantages that multicloud can bring (cost savings and the ability to leverage best-of-breed solutions), it’s often the right choice. What gets enterprises in trouble is the lack of an actual plan that states where and how they will store, secure, access, manage, and use all business data no matter where it resides. It’s not enough to push inventory data to a single cloud platform and expect efficiencies. We’re only considering data complexity here; other issues also exist, including access to application functions or services and securing all systems across all platforms. Data is typically where enterprises see the problems first, but the other matters will have to be addressed as well. A solid plan tells a complete data access story and includes data virtualization services that can make complex data deployments more usable by business users and applications. It also enables data security and compliance using a software layer that can reduce complexity with abstraction and automation. Simple data storage is only a tiny part of the solution you need to consider.


E-Commerce Firms Are Top Targets for API, Web Apps Attacks

Attack vectors such as server-side template injection (SSTI), server-side request forgery (SSRF) and server-side code injection have also become popular and may lead to data exfiltration and remote code execution. "This, in turn, may be playing a role in preventing online sales and damaging a company's reputation," the researchers said, citing an Arcserve survey in which 60% of consumers said they wouldn't buy from a website that had been breached in the previous 12 months. SSTI is a hacker favorite for zero-day attacks. Its use is well-documented in "some of the most significant vulnerabilities in recent years, including Log4j," the researchers said. Hackers mainly targeted commerce companies with Log4j, and 58% of all exploitation attempts happened in the space. The Hafnium criminal group popularized SSRF, which it used to attack Microsoft Exchange servers and reportedly to launch a supply chain cyberattack that affected 60,000 organizations, including commerce companies. Hafnium used the SSRF vulnerability to run commands on the web servers, according to the report.
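To illustrate the general class of flaw (not the specific engines named in the report), here is a minimal Python sketch of template injection: trouble starts when untrusted input is treated as template code rather than data. The `Config` class and its secret are hypothetical.

```python
# Even Python's built-in str.format can leak internals if user-controlled
# input is used as the format string itself.

class Config:
    SECRET_KEY = "s3cr3t"  # hypothetical application secret

config = Config()

# Unsafe: the attacker controls the "template" string, so attribute
# lookups in it are evaluated server-side.
user_input = "{c.SECRET_KEY}"
print(user_input.format(c=config))  # leaks: s3cr3t

# Safe: user input is interpolated as data, never evaluated as template,
# so the braces come through as literal text.
template = "Hello, {name}!"
print(template.format(name=user_input))  # Hello, {c.SECRET_KEY}!
```

Real-world SSTI in engines like Jinja2 or the JNDI lookups behind Log4Shell follows the same shape with far more dangerous reachable objects, which is why template code and user data must never share a channel.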


It’s going to take AI to power AI

AI in the datacentre can act as a pair of eyes, keeping a keen watch on every aspect of the facility to detect and prevent threats. Analysing data from sources such as access logs and network traffic would allow AI systems to watch for and alert organisations to cyber breaches in seconds. Further, we’re heading toward a point where AI-powered sensors could apply human temperature checks and facial recognition to monitor for physical intrusions. Ultimately, AI will have the opportunity to tune datacentres to operate like well-oiled machines, making sure all components work in harmony to deliver the highest level of performance in our AI-hungry world – a world pressurised by a cost-of-energy crisis and expanding cyber security threats. While the reality is more nuanced, put plainly, it is going to take AI to power AI. In fact, Gartner estimates that half of all cloud datacentres will use AI by 2025. It’s going to be a productive couple of years for the industry, developing one of the fastest-growing technologies, rolling it out, and doing so in a way that ensures trust.


Beyond ChatGPT: What is the Business Value of Generative Artificial Intelligence?

Beyond the attraction to the technology itself, generative AI has huge potential business value. Regardless of the processes, professions, or sectors of activity involved, the common thread among artificial intelligence projects is their shared objective of enabling, expediting, or enhancing human actions. The use of AI usually starts with a question, or a problem. This is immediately followed by the analysis of a significant amount of exogenous or endogenous information, with the aim of answering the question or problem through the creation of information useful to humans: aiding decision-making, detecting an anomaly, analyzing a hand-drawn schema, prioritizing problems to be solved, etc. More broadly, the automated generation of information makes it easier and safer to streamline some processes, such as moving from an idea to a first version, by allowing for quicker validation or failure recognition, A/B testing, and simplified re-experimentation.


Even in cloud repatriation there is no escaping hyperscalers

Hansson’s blog sparked pushback from cloud advocates like TelcoDR CEO Danielle Royston. She contended in an interview with Silverlinings that those using the cloud aren’t just paying for servers, but also for the proprietary tools the different cloud giants provide, the salaries they pay their top-tier developer talent, the hardware upgrades they make available to cloud users and the built-in security they offer. For those who use the cloud to its full potential, she said, the cloud is “the gift that keeps on giving.” Not only that, but those looking to repatriate workloads will need to invest significant time and money to transition back and hire more staff to develop new applications and manage the on-prem servers, she added. ... So, who’s right? Well, it seems the answer will vary by company and even by application. Pichai explained the cloud is the ideal environment for a small handful of workloads, namely “vanilla applications” which incorporate only standard rather than specialized features and “spikey applications” which need to scale on demand to accommodate irregular patterns of usage.



Quote for the day:

"To be an enduring, great company, you have to build a mechanism for preventing or solving problems that will long outlast any one individual leader" -- Howard Schultz

Daily Tech Digest - June 18, 2023

4 Advances In Penetration Testing Practices In 2023

Penetration testing has evolved significantly over the past few years, with a growing emphasis on mimicking real-life cyberattack scenarios for greater accuracy and relevance. By adopting more realistic simulation strategies, pen testers aim to emulate threats that an organization might realistically face in its operational environment, thereby providing valuable insights into its vulnerabilities. This approach entails examining an organization’s infrastructure from multiple angles, encompassing technological weaknesses as well as human factors such as employee behavior and resistance to social engineering attacks. ... With cyber threats constantly escalating and tech landscapes evolving at a rapid pace, automation enables organizations to efficiently identify potential weaknesses without sacrificing accuracy or thoroughness. Automated tools can expedite vulnerability assessment by scanning networks for known flaws or misconfigurations while continuously staying up to date with emerging threat information, significantly reducing manual workloads for security teams.
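As a hedged illustration of the automated-assessment idea (the package names, versions, and CVE identifiers below are all invented), a scanner at its simplest diffs a software inventory against a known-vulnerability feed:

```python
# Feed of known-vulnerable (package, version) pairs; a real tool would pull
# this from a continuously updated source such as an advisory database.
KNOWN_VULNERABLE = {
    ("exampled", "1.2.0"): "CVE-2023-0001 (hypothetical)",
    ("webfront", "4.1.3"): "CVE-2023-0002 (hypothetical)",
}

# Inventory of what is actually deployed, e.g. gathered by an agent.
inventory = [("exampled", "1.2.0"), ("webfront", "4.2.0"), ("cache", "2.0")]

def scan(inventory):
    """Return a finding for every inventory entry present in the feed."""
    return {pkg: KNOWN_VULNERABLE[pkg] for pkg in inventory if pkg in KNOWN_VULNERABLE}

for (name, version), advisory in scan(inventory).items():
    print(f"{name} {version}: {advisory}")
```

Production scanners add version-range matching, exploitability scoring, and misconfiguration checks, but the continuous inventory-vs-feed comparison is the core loop that takes the manual workload off security teams.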


Microservices vs. headless architecture: A brief breakdown

In general, the microservice-based approach requires that architects and developers determine exactly which microservices to build, which is not an easy task. Software teams must carefully assess how to achieve the best balance between application complexity and modularity when designing a microservices application. There are also few standards or guidelines that dictate the exact number of individual microservice modules an application should embody, and including too many microservices can add unnecessary development and operations overhead as well as compromise the architecture's flexibility. A headless architecture, by contrast, is much easier to design, since there is a clear delineation between the frontend and the backend: the division of responsibilities remains much clearer, and the relationship between components is less likely to get lost in translation. A single microservice-based application can easily represent dozens of individual services running across a complex cluster of servers. Each service must be deployed and monitored separately because each one could impact the performance of other microservices.


The Power Of The Unconscious Mind: Overcoming Mental Obstacles To Success

Bringing our unconscious mind into alignment and reconciliation with our conscious mind requires a level of self-awareness that many people are unable to achieve independently. Individuals who are struggling to achieve goals and don’t know why may find it helpful to work with an objective outside observer, such as a therapist or a professional coach, who can help them identify thought and behavior patterns that may be holding them back from advancing in work or life. Ultimately, to break out of these self-limiting beliefs, it’s important to change one’s thinking, particularly in areas where self-abnegating thoughts have been dominating our lives for far too long. When I’m working with clients, I try to help them develop what’s called a “growth mindset”—that is, an inherent belief in one’s own ability to constantly learn new skills, gain new capabilities and improve. People who have a growth mindset do not see failures as the end of the road, or as confirmation of the self-limiting, critical beliefs they’ve internalized throughout their lives.


How AI and advanced computing can pull us back from the brink of accelerated climate change

AI is one of the most significant tools left in the fight against climate change. AI has turned its hand to risk prediction, the prevention of damaging events such as wildfires, and carbon offsets. It has been described as vital to ensuring that companies meet their ESG targets. Yet it’s also an accelerant. AI requires vast computing power, which churns through energy when designing algorithms and training models. And just as software ate the world, AI is set to follow. AI will contribute as much as $15.7 trillion to the global economy by 2030, which is greater than the GDP of Japan, Germany, India and the UK. That’s a lot of people using AI as ubiquitously as the internet, from using ChatGPT to craft emails and write code to using text-to-image platforms to make art. The power that AI uses has been increasing for years now. For example, the power required to train the largest AI models doubled roughly every 3.4 months, increasing 300,000-fold between 2012 and 2018. This expansion brings opportunities to solve major real-world problems in everything from security and medicine to hunger and farming.


Unleashing the Power of Data Insights: Denodo Platform & the New Tableau GPT capability

When the Denodo Platform and Tableau GPT are integrated, Tableau customers can unlock several key benefits, including: Data Unification: The Denodo Platform’s logical data management capabilities provide Tableau GPT with a unified view of data from diverse sources. By integrating data silos and disparate systems, organizations can access a comprehensive, holistic data landscape within Tableau. The elimination of manual data consolidation simplifies the process of accessing and analyzing data, accelerating insights and decision-making. This significantly reduces the need for manual effort and enhances efficiency in data management. Expanded Data Access: The Denodo Platform’s ability to connect to a wide range of data sources means Tableau GPT can leverage an extensive array of structured and unstructured data. With connections to over 200 data sources, the Denodo Platform lets organizations tap into a comprehensive, distributed data ecosystem as easily and simply as connecting to a single data source.


Importance of quantum computing for reducing carbon emissions

Quantum computers have been an exciting tech development in recent times. They are exponentially faster than classical computers for certain problems, which makes them suitable for several applications in a wide variety of areas. However, they are still in their nascent stage of development, and even the most sophisticated machines are limited to a few hundred qubits. There is also the inherent problem of random fluctuations, or noise—the loss of information held by qubits. This is one of the chief obstacles to the practical implementation of quantum computers. As a result, it takes more time for these noisy intermediate-scale quantum (NISQ) computers to perform complex calculations. Even the most basic reaction of CO2 with the simplest amine, ammonia, turns out to be too complex for these NISQs. One possible remedy to this problem is to combine quantum and classical computers, to overcome the problem of noise in quantum algorithms. The variational quantum eigensolver (VQE) utilises a quantum computer to estimate the energy of a quantum system, while using a classical computer to optimise and suggest improvements to the calculation.
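The hybrid quantum-classical loop can be sketched entirely classically. Below, a one-parameter state and the one-qubit Hamiltonian H = Z are a toy stand-in for the quantum device, and a parameter scan stands in for the classical optimiser; this is an illustration of the loop's structure, not a real VQE implementation.

```python
import math

# Toy stand-in for the quantum processor: prepare |psi(theta)> = Ry(theta)|0>
# and "measure" the expectation value of H = Z, i.e. P(0) - P(1).
def energy(theta):
    p0 = math.cos(theta / 2) ** 2   # probability of measuring |0>
    p1 = math.sin(theta / 2) ** 2   # probability of measuring |1>
    return p0 - p1                  # <Z> = P(0) - P(1)

# Classical half of the loop: propose parameters, evaluate the "quantum"
# energy estimate, and keep the best (a real VQE would use a gradient-based
# or gradient-free optimiser instead of a grid scan).
thetas = [2 * math.pi * k / 200 for k in range(201)]
best = min(thetas, key=energy)
print(round(energy(best), 3))  # -1.0, the ground-state energy of Z
```

On real hardware, each `energy` call is a batch of noisy circuit executions, and the classical optimiser's job is precisely to make progress despite that noise.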


Master the small daily improvements that set great leaders apart

When people talk about authentic leadership, what they’re really looking for is someone who practices what they preach. You don’t have to be successful at everything you’ll ask others to try, but you’ll need to have tried it. You’ll also need to understand how and when certain skills work, and when they don’t. Consider making time to take care of yourself. We tell folks that it’s important to take vacation time to recharge their batteries, but do we do the same? I had a colleague who would take a big splashy vacation every year. He’d make sure to tell everyone that there was no cellphone reception where he was going for that week. The other 51 weeks of the year? He’d respond instantly to all communications and always follow up with questions, sending messages day or night, seven days a week. The clear subtext was that outside of disappearing for one week a year, there was no expectation of taking time away. His message about time away from the office rang hollow to everyone around him. Great leaders make a point of disappearing often to take care of themselves in visible ways. 


Unleashing the Power of AI-Engineered DevSecOps

Implementing an AI-engineered DevSecOps solution comes with several potential pitfalls that can derail the process if not appropriately managed. Here are a few of them, along with suggestions for how to avoid them: Inadequate Planning and Alignment with Business Goals: Ignoring the strategic alignment between implementing AI-engineered DevSecOps and overall business goals can lead to undesirable outcomes. Clearly define the business objectives and how AI-engineered DevSecOps supports them. Outline expected outcomes and key performance indicators (KPIs) that align with business goals to guide the initiative. Neglecting Training and Upskilling: AI tools can be complex, and without proper understanding and training, their deployment may not yield the desired results. Invest in training your teams on AI-engineered DevSecOps tools and techniques. Ensure they understand the functionalities of these tools and how to use them effectively. Upskilling your team will be crucial for leveraging AI capabilities. Ignoring Change Management: Introducing AI into DevSecOps is a significant change that can disrupt workflows and meet resistance from team members.


Scientists conduct first test of a wireless cosmic ray navigation system

It's similar to X-ray imaging or ground-penetrating radar, except with naturally occurring high-energy muons rather than X-rays or radio waves. That higher energy makes it possible to image thick, dense substances: the denser the imaged object, the more muons are blocked. The Muographix system relies on four muon-detecting reference stations above ground serving as coordinates for the muon-detecting receivers, which are deployed either underground or underwater. The team conducted the first trial of a muon-based underwater sensor array in 2021, using it to detect the rapidly changing tidal conditions in Tokyo Bay. They placed ten muon detectors within the service tunnel of the Tokyo Bay Aqua-Line roadway, which lies some 45 meters below sea level. They were able to image the sea above the tunnel with a spatial resolution of 10 meters and a time resolution of one minute, sufficient to demonstrate the system's ability to sense strong storm waves or tsunamis. The array was put to the test in September of that same year, when Japan was hit by a typhoon approaching from the south, producing mild ocean swells and tsunamis.


Five Steps to Principle-based Technology Transformation

Enterprise architecture frameworks prescribe using a set of principles to guide and align all architectural decisions within a particular environment. But how does one arrive at that set of principles, and how does it help to achieve some desired end state? Still, I believe in principles – chosen at the right time and used in the proper context. They are much like values in life – they allow you to test and focus decisions in complex environments, and they provide a mechanism for explaining technology decisions to businesspeople. As principles guide decisions for future actions, they must ensure achievement of the transformation goals. But how does one determine the starting point in a complex environment, and how does one define the endpoint in an ever-changing landscape? I found these questions very perplexing until I realised that the success of a technology architecture is not about using any specific system or solution – but more about the CHARACTERISTICS of the environment required by the business to grow, prosper and achieve its strategic objectives.



Quote for the day:

"Success is not a random act. It arises out of a predictable and powerful set of circumstances and opportunities." -- Malcolm Gladwell

Daily Tech Digest - June 17, 2023

Borderless Data vs. Data Sovereignty: Can They Co-Exist?

Businesses have long understood that data sharing has limits (or borders). Legal separations keep data from various subsidiaries distinct or limit sharing between partners to specific data types. Multi-tenant software applications often require logical partitions to keep customer data private. What is rapidly changing are new data sovereignty laws, often cloaked as "data privacy" regulations, that enforce geographic boundaries on where data is processed and stored. Businesses must comply with the laws of each country where they operate, and data sovereignty presents a clear compliance challenge as companies hurry to rethink how and where they safely acquire personal data to share and protect. Countries enacting regulations keeping personal data inside their borders may deem their citizens' data of strategic national importance. More commonly, it's an enforcement mechanism that acknowledges personal data as an asset owned by individuals that businesses must use and share according to that country's laws. Recent data sovereignty requirements cannot be easily bypassed or pushed to the consumer's consent.


All change: The new era of perpetual organizational upheaval

With upsets coming from all directions—whether they be supply chain disruptions, surging inflation, or spikes in interest rates and energy prices—companies need to focus on being prepared and ready to act at all times. The key is not just to bounce out of crises, but to bounce forward—landing on their feet relatively unscathed and racing ahead with new energy. ... But it’s raising huge questions: How can companies provide structure and support to all employees regardless of where they are? How do they address the potential risks to company culture and the sense of belonging, as well as to collaboration and innovation? The pandemic exacerbated other trends, including the continuing skills mismatch in the labor market, which the onward march of technology is intensifying. It threw a harsh light on the challenge of workplace motivation—sometimes referred to as the “great attrition,” with workers leaving their jobs, or quiet quitting, essentially downscaling their efforts on the job.


A guide to becoming a Chief Information Security Officer: Steps and strategies

The technical skills are a must-have. Know all about network security, cloud security, identity access management, adopting and adapting infrastructure, along with tools and technologies that allow for the preservation of organizational data privacy, integrity and computing availability. Security engineers who are interested in becoming CISOs often focus on problem hunting. CISOs need to not only be able to find problems, but to identify problems and vulnerabilities that aren’t apparent to those around them. Learning to ask the right kinds of questions and thinking about issues in unconventional ways take time and practice. CISOs need to continuously update their mental models when it comes to thinking about cyber security. The mental model required for on-premise cyber security implementation is different from that required for the cloud. As an increasing number of automation and AI-based tools emerge, mental models will again need to be retrofitted. Many aspiring CISOs sell their technical credentials to prospective employers. This is important. 


TinyML computer vision is turning into reality with microNPUs (µNPUs)

Digital image processing—as it used to be called—is used for applications ranging from semiconductor manufacturing and inspection, to advanced driver assistance system (ADAS) features such as lane-departure warning and blind-spot detection, to image beautification and manipulation on mobile devices. And looking ahead, CV technology at the edge is enabling the next level of human–machine interfaces (HMIs). HMIs have evolved significantly in the last decade. On top of traditional interfaces like the keyboard and mouse, we now have touch displays, fingerprint readers, facial recognition systems, and voice command capabilities. While clearly improving the user experience, these methods have one attribute in common—they all react to user actions. The next level of HMI will be devices that understand users and their environment via contextual awareness. Context-aware devices sense not only their users, but also the environment in which they are operating, all in order to make better decisions toward more useful automated interactions.


Intel Announces Release of ‘Tunnel Falls,’ 12-Qubit Silicon Chip

“Tunnel Falls is Intel’s most advanced silicon spin qubit chip to date and draws upon the company’s decades of transistor design and manufacturing expertise. The release of the new chip is the next step in Intel’s long-term strategy to build a full-stack commercial quantum computing system. While there are still fundamental questions and challenges that must be solved along the path to a fault-tolerant quantum computer, the academic community can now explore this technology and accelerate research development.” — Jim Clarke, director of Quantum Hardware, Intel Why It Matters: Currently, academic institutions don’t have high-volume manufacturing fabrication equipment like Intel. With Tunnel Falls, researchers can immediately begin working on experiments and research instead of trying to fabricate their own devices. As a result, a wider range of experiments become possible, including learning more about the fundamentals of qubits and quantum dots and developing new techniques for working with devices with multiple qubits.


What bank leaders should know about AI in financial services

While this technology has many exciting potential use cases, so much is still unknown. Many of Finastra’s customers, whose job it is to be risk-conscious, have questions about the risks AI presents. And indeed, many in the financial services industry are already moving to restrict use of ChatGPT among employees. Based on our experience as a provider to banks, Finastra is focused on a number of key risks bank leaders should know about. Data integrity is table stakes in financial services. Customers trust their banks to keep their personal data safe. However, at this stage, it’s not clear what ChatGPT does with the data it receives. This raises the even more concerning question: could ChatGPT generate a response that shares sensitive customer data? With the old-style chatbots, questions and answers are predefined, governing what’s returned. But what is asked and returned with new LLMs may prove difficult to control. This is a top consideration that bank leaders must weigh and monitor closely. Ensuring fairness and lack of bias is another critical consideration.


Are public or proprietary generative AI solutions right for your business?

Internal large language models are interesting. Training on the whole internet has benefits and risks — not everyone can afford to do that or even wants to do it. I’ve been struck by how far you can get on a big pre-trained model with fine tuning or prompt engineering. For smaller players, there will be a lot of uses of the stuff [AI] that’s out there and reusable. I think larger players who can afford to make their own [AI] will be tempted to. If you look at, for example, AWS and Google Cloud Platform, some of this stuff feels like core infrastructure — I don’t mean what they do with AI, just what they do with hosting and server farms. It’s easy to think ‘we’re a huge company, we should make our own server farm.’ Well, our core business is agriculture or manufacturing. Maybe we should let the A-teams at Amazon and Google make it, and we pay them a few cents per terabyte of storage or compute. My guess is only the biggest tech companies over time will actually find it beneficial to maintain their own versions of these [AI]; most people will end up using a third-party service. 


Governance in the Age of Technological Innovation

To keep abreast of technological change and innovation, the board needs to ensure that its innovation and risk agendas are up-to-date, and that innovation is incorporated into the organisation’s strategy review. This may involve reviewing key performance indicators, performance measures and incentives. Within the board, the appropriate composition, culture and interactions can promote innovation. Not all board directors will have the relevant technical expertise, but more diverse boards can build collective literacy and enhance human capital in the boardroom, said De Meyer. Where necessary, committees such as scientific or innovation committees can be created to drive greater attention to these topics. In these cases, naming matters, said Janet Ang, non-executive Chair of the Institute of Systems Science in the panel discussion. For instance, referring to a committee as “Technology and Risk” instead of narrowly naming it as “IT” gives it more weight and scope. Fundamentally, boards should not only strive for conformance but also performance, urged Su-Yen Wong, Chair of the Singapore Institute of Directors. 


Can You Renegotiate Your Cloud Bill by Refusing to Pay It?

Cloud hyperscalers continue to face questions about the cost and reliability of their services, especially in light of the brief AWS outage on June 13 that affected Southwest Airlines, McDonald’s, and The Boston Globe, among others. Further, some organizations face regulatory requirements that preclude the use of the cloud for certain datasets and transactions, Katz says. “There’s really no one-size-fits-all answer because every manufacturer, every organization, every company has different requirements.” There can be times when a cloud-first approach does not make sense for organizations. Katz says his company worked with a client whose dataset is very transactional with lots of changes and database read-writes. “We ran an assessment for them and going off to the public cloud was going to be eight times more expensive a month than keeping it on prem.” ... Much of the market is pushing toward a cloud-first world, but the economics could become challenging in the future. “At some point in time, the cost of doing business in the cloud is going to be exponentially higher, usually, than if you were to buy a depreciating asset and then kick it to the curb,” Katz says.


Red teaming can be the ground truth for CISOs and execs

What red teams can give CISOs is the cold, hard truth of how their network stacks up against threats that could be ruinous to the business. Red teams leave no stone unturned and pull on every thread until it unravels. This shines light on the vulnerabilities that will harm the finances or reputation of the business. With a red team, objective-based continuous penetration testing (led by experts that know attackers’ best tricks) can relentlessly scrutinize the attack surface to explore every avenue that could lead to a breakthrough. This proactive, “offensive security” approach will give a business the most comprehensive picture of their attack surface that money can buy, mapping out every possibility available to an attacker and how it can be remediated. It is also not limited to testing the technology stack; for businesses concerned that their employees are susceptible to social engineering attacks, red teams can emulate social engineering scenarios as part of their testing. A stringent social engineering assessment program should not be overlooked in favor of only scrutinizing weaknesses in IT infrastructure. 



Quote for the day:

"Leadership is just another word for training." -- Lance Secretan

Daily Tech Digest - June 15, 2023

The five new foundational qualities of effective leadership

Today’s leaders have to be able to establish a compelling destination and then navigate through the fog with a compass. “You have to be ready to make a decision today, realizing that you may get new data tomorrow that means you have to reverse the decision you just made,” a veteran CEO of a Fortune 25 company told us. “You have to have the courage to follow that new information. The job’s always been ambiguous. But the environment has never been this fluid.” Boards and CEOs expect succession candidates to be adept at providing direction and key performance indicators that will signal whether course adjustments are necessary. “We’re living in an age with many more discontinuities than we had a generation or two ago,” said Mark Thompson, former CEO of the New York Times Company and now board chairman of Ancestry. “It’s not about trying to find the perfect strategies. It’s more about helping organizations to be more open, flexible, and adaptable to change.” This shift demands a more dynamic, individual leadership approach, as well as a reimagining of basic organizational processes. 


5 best practices to ensure the security of third-party APIs

Maintaining an API inventory that automatically updates as code changes is an instrumental first step for an API security program, says Jacob Garrison, a security researcher at Bionic. The inventory should distinguish between first-party and third-party APIs, and it encourages continuous monitoring for shadow IT — APIs brought on board without notifying the security team. “To ensure your inventory is robust and actionable, you should track which APIs transmit business-critical information, such as personally identifiable information and payment card data,” he says. An API inventory is complementary to third-party risk management, according to Garrison. When developers utilize third-party APIs, it’s worthwhile to consider risk assessments of the vendors themselves. ... Frank Catucci, chief technology and head of security research for Invicti Security, agrees that including an inventory of third-party APIs is critical. "You need to have third-party APIs be part of your overall API inventory and you have to look at them as assets that you own, that you are responsible for," he says.
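The kind of inventory Garrison describes can be sketched as a simple data structure. This is a minimal illustration, not any vendor's actual schema; all field and function names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ApiRecord:
    """One entry in an API inventory (illustrative fields only)."""
    name: str
    owner: str                 # team or vendor responsible
    third_party: bool          # first-party vs. third-party
    handles_pii: bool          # transmits personally identifiable information
    handles_card_data: bool    # transmits payment card data

def business_critical(inventory):
    """Return the APIs carrying the data worth tracking most closely."""
    return [a for a in inventory if a.handles_pii or a.handles_card_data]

inventory = [
    ApiRecord("orders", "platform-team", False, True, True),
    ApiRecord("geocoding", "MapVendor Inc.", True, False, False),
]
print([a.name for a in business_critical(inventory)])  # ['orders']
```

Even a toy record like this makes the two distinctions in the excerpt explicit: who owns the API, and whether it moves business-critical data.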


Generative AI’s change management challenge

“The hardest part of AI acceptance is creating a space where employees can still add value and not feel they are competing with AI to create value,” Bellefonds added. “A lot of the work we do when it comes to change management and coaching is to help employees work with AI and at the same time, change the way they add value, so that a part of their job is taken by AI but their part refocuses on higher value-adding tasks.” Exactly how those processes are rewired and the working methods changed will vary from one enterprise to another, he said. There are other ways in which employees’ concerns about AI are unevenly distributed, too. Leaders are more likely to be optimistic, and frontline workers concerned, BCG found. And while 68% of leaders believe their companies have implemented adequate measures to ensure responsible use of AI, only 29% of their frontline employees feel that way. Despite BCG’s findings of optimism in the workforce, there’s a darker side. Over one-third of respondents think their job is likely to be eliminated by AI, and almost four-fifths want governments to step in and deliver AI-specific regulations to ensure it’s used responsibly.


As Machines Take Over — What Will It Mean to Be Human?

Biocomputing is a field of study that uses biologically-based molecules, such as DNA or proteins, to perform computational tasks. Imitating the genius of nature can completely shift the paradigm of understanding when it comes to the computation and storage of data. The field has shown promise in cryptography and drug discovery. However, biocomputers are still limited compared to non-bio computers since they aren't good at cooling themselves and doing more than two things simultaneously. Advancements in AI, however, have been booming. Since 2012, interest in AI, especially in machine learning, has been renewed, leading to a dramatic increase in funding and investment. Machine learning models ingest large amounts of data and infer patterns. More recently, generative AI has become extremely popular with the release of large AI models such as MidJourney, ChatGPT and Stable Diffusion. Generative AI is a class of AI algorithms that generate new data or content extremely similar to existing data, nearly identical to human-made data.


What is SDN and where is it going?

There are three main components to a software-defined network: controller, applications, and devices. The controller has taken over the role of the control plane on each individual network device. It populates the tables that the data planes on those devices use to do their work. There are various communication protocols that can be used for this purpose, including OpenFlow, though some vendors use proprietary protocols. Communication between the controller and devices is referred to as southbound APIs. The software controller is, in turn, managed by applications, which can fulfill any number of network administration roles, including load balancers, software-defined security services, orchestration applications, or analytics applications that keep tabs on what's going on in the network. These applications communicate with the controller (northbound APIs) through well-documented REST APIs that allow applications from different vendors to communicate with ease. 
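The controller/data-plane split described above can be sketched as a toy model. This is only an illustration of the division of responsibilities; real deployments use a southbound protocol such as OpenFlow rather than direct method calls, and all names here are made up:

```python
# Toy model of the SDN split: one controller computes forwarding rules
# (control plane) and pushes them to devices, which only match-and-forward
# (data plane). All class and port names are illustrative.

class Device:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # dst -> out_port, populated by the controller

    def forward(self, dst):
        # Data plane: pure table lookup, no routing logic of its own
        return self.flow_table.get(dst, "drop")

class Controller:
    """Stands in for the controller plus its southbound API."""
    def __init__(self, devices):
        self.devices = devices

    def install_route(self, device_name, dst, out_port):
        self.devices[device_name].flow_table[dst] = out_port

# A "northbound" application would ask the controller to set up a path:
devices = {"sw1": Device("sw1")}
ctrl = Controller(devices)
ctrl.install_route("sw1", "10.0.0.5", "port2")
print(devices["sw1"].forward("10.0.0.5"))  # port2
print(devices["sw1"].forward("10.0.0.9"))  # drop
```

The point of the sketch is the asymmetry: the device never decides where traffic goes, it only consults a table that the controller filled in.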


Using Trauma-Informed Approaches in Agile Environments

Software is, by definition, very abstract. For this reason, we naturally tend to be in our heads and thoughts most of the time while at work. However, a more trauma-informed approach requires us to pay more attention to our physical state and not just to our brain and cognition. Our body and its sensations are giving us many signs, vital not just to our well-being but also to our productivity and ability to cognitively understand each other and adapt to changes. Paradoxically, in the end, paying more attention to our physical and emotional state gives us more cognitive resources to do our work. Noticing our bodily sensations at the moment, like breath or muscle tension in a particular area, can be a first step to getting out of a traumatic pattern. And a generally higher level of body awareness can help us fall less into such patterns in the first place. Simplified - our body awareness anchors us in the here and now, making it easier for us to recognize past patterns as inadequate for the current situation.


How Pyramid Thinking Can Revolutionize Your Data Strategy

Before devising a corporate data strategy, the main things you need to know are the strategy and objectives of your organization as a whole. Data can be a truly transformative tool, but even the sharpest knife needs to be used accurately to get the best results -- which is why you need to know the end goal before you can understand how data can help you achieve it. This end goal forms the very peak of the pyramid and it is by looking downwards from it that you can understand the role that data can play. For organizations struggling to pinpoint that goal (as oftentimes happens when the business strategy isn’t well-defined and documented), it is worth considering key business problems and the consequent opportunities for improvement. ... Identifying business goals gives you the basis upon which to build your data strategy, and with that you can begin to be more specific about the change you are looking to make. An actionable and measurable formula helps you shape those changes with clarity, such as “we want to do x by measuring/tracking/analyzing y in order to do z.”


Network spending priorities for second-half 2023

Security is the area where most users expect to spend more, but at the same time an area where they believe their spending is most likely to be sub-optimal. Three-quarters of buyers think they already spend too much on security because they’ve layered things on without considering the whole picture. You hear terms like “holistic approach” or “rethinking” a lot in their comments, but at the same time, less than an eighth of the users expect to redo their security strategies in any way.  ... The reason for the seemingly mindless AI enthusiasm is a simple reversal of an old saying: “Where there’s hope, there’s life.” AI could (theoretically) reduce operator errors. It could (hopefully) improve network capacity planning. It could (presumably) help secure applications and data and spot malefactors. All these things are recurring problems that seem to defy solution, and AI offers a hope that a solution might be near at hand. What’s not to love, provisionally of course.


Biodiversity Means Business

Technology can play a key role in navigating biodiversity issues. Predictive analytics, machine learning, digital twins, blockchain and the Internet of Things can deliver insight, visibility and measurability into sourcing, supply chains and environmental impacts. However, Katic emphasizes that these tools must be used to drive real change. “They must support a paradigm shift to new, sustainable models of development, rather than entrenching business as usual. They must deliver enhanced transparency and accountability,” she says. Ultimately, companies must embed biodiversity deep into their business strategies and daily operations, Katic says. This includes the use of science-based methods that revolve around the UN’s Sustainable Development Goals and its Global Biodiversity Framework. It can also incorporate tools such as the S&P’s scoring system, part of its UN-linked GlobalSustainable1 initiative, which provides dependency scores, ecosystem footprint insights, and other biodiversity data that can guide decision-making. In addition, the SBTN framework can serve as a valuable resource. More than 200 organizations helped shape the initial set of methods, tools, and guidance.


5 roadblocks to Rust adoption in embedded systems

Rust is not a trivial language to learn. While it does share common ideas and concepts with many of the languages that came before it, including C, the learning curve is steeper. When a company looks to adopt a new language, they hire engineers who already know the technology or are forced to train their team. Teams interested in using Rust for embedded will find themselves in a small, niche community. Within this community, not many qualified embedded software engineers know Rust. That means paying a premium for the few developers who know Rust or investing in training the existing internal team. Training a team to use Rust isn’t a bad idea. Every company and developer should be investing in themselves constantly. Our field changes so rapidly that you’ll quickly get left behind if you don’t. However, switching from one programming language to another must provide a return on investment for the company, especially when switching to an immature language like Rust. 



Quote for the day:

"Don't focus so much on who is following you, that you forget to lead." -- E'yen A. Gardner

Daily Tech Digest - June 14, 2023

Malicious hackers are weaponizing generative AI

The headline here is not that this new threat exists; it was only a matter of time before threats powered by generative AI showed up. There must be some better ways to fight these types of threats that are likely to become more common as bad actors learn to leverage generative AI as an effective weapon. If we hope to stay ahead, we will need to use generative AI as a defensive mechanism. This means a shift from being reactive (the typical enterprise approach today), to being proactive using tactics such as observability and AI-powered security systems. The challenge is that cloud security and devsecops pros must step up their game in order to keep out of the 24-hour news cycles. This means increasing investments in security at a time when many IT budgets are being downsized. If there is no active response to managing these emerging risks, you may have to price in the cost and impact of a significant breach, because you’re likely to experience one. Of course, it’s the job of security pros to scare you into spending more on security or else the worst will likely happen.


Avoiding the Pain of a ‘Resume-Driven Architecture’

A resume-driven architecture occurs when the interests of developers lead them to designs that no longer align with maximized impacts and outcomes for the organization. Often, the developer clings to a technology that provides them a greater level of control and, at least initially, a higher salary. Meanwhile, the organization gets an architecture that only a handful of people know how to manage and maintain, limiting the available talent pool and hindering future innovation. ... There’s no sense in investing resources in a bespoke architecture if it’s not providing you with any differentiation—especially when competitors are achieving the same outcome with fewer resources. Moreover, getting stuck in a Stage Two mindset when the field moves on to Stage Three (or, worse, Stage Four) and cuts you off from the next wave of innovation. Subsequent technology breakthroughs often build on top of—and interoperate with—the previous technology layers. If you’re stuck with a custom architecture when the industry has moved on, you can miss out on the next wave of innovation and fall further behind competitors.


In the Great Microservices Debate, Value Eats Size for Lunch

A key criterion for a service to be standing alone as a separate code base and a separately deployable entity is that it should provide some value to the users — ideally the end users of the application. A useful heuristic to determine whether or not a service satisfies this criterion is to think about whether most enhancements to the service would result in benefits perceivable by the user. If in a vast majority of updates the service can only provide such user benefit by having to also get other services to release enhancements, then the service has failed the criterion. ... Providing value is also about the cost efficiency of designing as multiple services versus combining as a single service. One such aspect that was highlighted in the Prime Video case was chatty network calls. This could be a double whammy because it not only results in additional latency before a response goes back to the user, but it might also increase your bandwidth costs. This would be more problematic if you have large or several payloads moving around between services across network boundaries. 


Enhancing Code Reviews with Conventional Comments

In software development, code reviews are a vital practice that ensures code quality, promotes consistency, and fosters knowledge sharing. Yet, at times, they can drive me absolutely bananas! However, the effectiveness of code reviews is contingent on clear, concise communication. This is where Conventional Comments play a pivotal role. Conventional Comments provide a standardized method of delivering and receiving feedback during code reviews, reducing misunderstandings and promoting more efficient discussions. Conventional Comments are a structured commenting system for code reviews and other forms of technical dialogue. They establish a set of predefined labels, such as nitpick, issue, suggestion, praise, question, thought, and notably, non-blocking. Each label corresponds to a specific comment type and expected response. ... By standardizing labels and formats, Conventional Comments enhance the clarity of comments, eliminating vague language and misunderstandings, ensuring all participants understand the intent and meaning of the comments.
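The label-plus-subject shape that Conventional Comments standardizes is simple enough to generate mechanically. A minimal sketch, with the helper name and label set chosen for illustration (decorations such as "non-blocking" go in parentheses after the label):

```python
# Labels from the Conventional Comments convention mentioned above.
LABELS = {"nitpick", "issue", "suggestion", "praise", "question", "thought"}

def conventional_comment(label, subject, decorations=()):
    """Format a review comment as '<label> (decorations): <subject>'."""
    if label not in LABELS:
        raise ValueError(f"unknown label: {label}")
    deco = f" ({', '.join(decorations)})" if decorations else ""
    return f"{label}{deco}: {subject}"

print(conventional_comment("suggestion", "Extract this into a helper.",
                           decorations=("non-blocking",)))
# suggestion (non-blocking): Extract this into a helper.
```

The value is in the prefix, not the tooling: a reviewer who writes "suggestion (non-blocking): ..." has already told the author both the comment's intent and the expected response.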


How the modern CIO grapples with legacy IT

When reviewing products and services, Abernathy considers whether a technology still fits into requirements for simplicity of geographies, designs, platforms, applications, and equipment. “Driving for simplicity is of paramount importance because it increases quality, stability, value, agility, talent engagement and security,” she says. Other red flags for replacement include point solutions, duplicative solutions, or technologies that become very challenging because of unreasonable pricing models, inadequate support or instability. In some ways, moving to SaaS-based applications makes the review process simpler because decisions as to whether and when to update and refactor are up to the provider, Ivy-Rosser says. But while technology change decisions are the responsibility of the provider, if you’re modernizing in a hybrid world, you need to make sure your data is ready to move and that any changes don’t create privacy issues. With SaaS, the review should take a hard look at the issues surrounding ownership and control.


The psychological impact of phishing attacks on your employees

The aftermath of a successful phishing attack can be emotionally draining, leaving people feeling embarrassed and ashamed. The fear of accidentally clicking a phishing email can affect a person’s performance and productivity at work. Even simulated phishing attacks can cause stress when employees are lured with fake promises of bonuses or freebies. Furthermore, when phishing emails repeatedly get through security measures and are not neutralized, employees may view these as safe and click on them. This could ultimately lead to employees losing faith in their employer’s ability to protect them. ... Organizations owe it to their employees to be proactive. To ensure employees are protected, they should implement advanced technology that uses Artificial Intelligence and Machine Learning models, such as Natural Language Processing (NLP) and Natural Language Understanding. These tools can detect even the most advanced phishing attempts and will serve as a safety net.


Cyber liability insurance vs. data breach insurance: What's the difference?

Understanding the distinction is important, as cyber insurance is becoming an integral part of the security landscape. Many companies may have no choice but to find insurance as more organizations are requiring that their business partners have cyber coverage. Many traditional business insurance policies will simply not cover cyber incidents, considering them outside the scope of the agreement, which is why cyber insurance has become a separate form of protection. It’s also important to note that getting insurance isn’t guaranteed — insurers are increasingly asking for more proof that strong cybersecurity strategies are in place before agreeing to provide coverage. Many companies may have no choice but to meet such terms. Put simply, cyber liability insurance refers to coverage for third-party claims asserted against a company stemming from a network security event or data breach. Data breach insurance, on the other hand, refers to coverage for first-party losses incurred by the insured organization that has suffered a loss of data.


These leaders recognize that transformation investments remain critical to any business, and they plan to emerge from these volatile times armed with new business models and revenue streams. In short, they plan to continue winning through transformation, and they are laser-focused about how they will do it. You might even say they’re “outcomes obsessed.” ... Remember, your goal is to prune the tree so it can thrive—not just to go around sawing off branches. Any cuts must set up individuals, teams, and departments for long-term success, despite the short-term pain. One way I’ve seen successful leaders do this is by taking the choices they are considering (both cutting investments and expanding them) and mapping them out in terms of their expected financial and nonfinancial impact ... Top-performing companies look beyond functional excellence, and instead aim for enterprise-level reinvention that extends across the company’s business, operating, and technology models. You should too. These transformations enable you to strengthen ecosystems, close capability gaps, and better chart your future revenue streams. 


Don't Let Age Mothball Your IT Career

Age discrimination is a significant concern in the IT industry, Schneer says. “Some companies may prioritize younger workers who are perceived to be more tech-savvy and adaptable,” she notes. “However, experienced professionals bring valuable skills and knowledge that can be an asset to any organization.” Weitzel observes that it's difficult to know how prevalent age discrimination is in any industry. “But applicants can be proactive in combatting any false assumptions by showcasing upfront the current skills and recent experience that employers are seeking.” Age discrimination may be more prevalent in certain IT fields, such as software development or web design, where rapid advancements in technology can make older professionals feel less relevant, Schneer says. “However, roles that require extensive experience and expertise, such as IT management or cybersecurity, may be less susceptible to age bias.” When encountering suspected age bias, senior IT workers should document any incidents or patterns of behavior that suggest discrimination, Schneer advises.


Thinking Deductively to Understand Complex Software Systems

The main goal is to think through the role of tests in helping you understand complex code, especially in cases where you are starting from a position of unfamiliarity with the code base. I think most of us would agree that tests allow us to automate the process of answering a question like "Is my software working right now?". Since the need to answer this question comes up all the time, at least as frequently as you deploy, it makes sense to spend time automating the process of answering it. However, even a large test suite can be a poor proxy for this question since it can only ever really answer the question "Do all my tests pass?". Fortunately, tests can be useful in helping us answer a larger range of questions. In some cases they allow us to dynamically analyse code, enabling us to glean a genuine understanding of how complex systems operate, that might otherwise be hard won.
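One concrete way tests answer "how does this code actually behave?" is a characterization (pinning) test: call the unfamiliar code, observe what it returns, and capture those observations as assertions before changing anything. A hedged sketch, where `legacy_discount` is a made-up stand-in for inherited code:

```python
# A characterization test: pin down the *current* behavior of
# unfamiliar code with assertions, so later changes that alter it
# are caught. `legacy_discount` stands in for code we didn't write.

def legacy_discount(total, is_member):
    if total > 100 and is_member:
        return round(total * 0.85, 2)
    if total > 100:
        return round(total * 0.95, 2)
    return total

# Observed behavior, captured as tests. Each assertion documents an
# answer to "what does this code do in case X?".
assert legacy_discount(200, True) == 170.0    # members over 100: 15% off
assert legacy_discount(200, False) == 190.0   # non-members over 100: 5% off
assert legacy_discount(50, True) == 50        # small orders: no discount
print("characterization tests pass")
```

Tests written this way are less about "is it correct?" and more about dynamically mapping what a complex system does, which is exactly the larger range of questions the excerpt describes.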



Quote for the day:

"To do great things is difficult; but to command great things is more difficult." -- Friedrich Nietzsche

Daily Tech Digest - June 13, 2023

AI and tech innovation, economic pressures increase identity attack surface

In the new attack observed by Microsoft, the attackers, which the company tracks under the temporary Storm-1167 moniker, used a custom phishing toolkit they developed themselves and which uses an indirect proxy method. This means the phishing page set up by the attackers does not serve any content from the real log-in page but rather mimics it as a stand-alone page fully under attackers' control. When the victim interacts with the phishing page, the attackers initiate a login session with the real website using the victim-provided credentials and then ask for the MFA code from the victim using a fake prompt. If the code is provided, the attackers use it for their own login session and are issued the session cookie directly. The victim is then redirected to a fake page. This is more in line with traditional phishing attacks. "In this AitM attack with indirect proxy method, since the phishing website is set up by the attackers, they have more control to modify the displayed content according to the scenario," the Microsoft researchers said.


Revolutionizing DevOps With Low-Code/No-Code Platforms

With non-IT professionals developing applications, there is a higher risk of introducing vulnerabilities that could compromise the security of the application and the organization. Additionally, the lack of oversight and governance could lead to poor coding practices and technical debt. For instance, the use of new-generation iPaaS platforms by citizen integrators has made it difficult for security leaders to have full visibility into the organization’s valuable assets. Attackers are aware of this and have already taken advantage of improperly secured app-to-app connections in recent supply chain attacks, such as those experienced by Microsoft and GitHub. ... As organizations try to integrate low-code and no-code applications with legacy systems or other third-party applications, technical challenges can arise. For example, if an organization wants to integrate a low-code application with an existing ERP system, it may face challenges in terms of data mapping and synchronization. Some low-code and no-code applications are built to export data and share it well, but when it comes to integrating event triggers, business logic, or workflows, these software solutions hit limits. 


Rethinking AI benchmarks: A new paper challenges the status quo of evaluating AI

One of the key problems that Burnell and his co-authors point out is the use of aggregate metrics that summarize an AI system’s overall performance on a category of tasks such as math, reasoning or image classification. Aggregate metrics are convenient because of their simplicity. But the convenience comes at the cost of transparency and lack of detail on some of the nuances of the AI system’s performance on critical tasks. “If you have data from dozens of tasks and maybe thousands of individual instances of each task, it’s not always easy to interpret and communicate those data. Aggregate metrics allow you to communicate the results in a simple, intuitive way that readers, reviewers, or — as we’re seeing now — customers can quickly understand,” Burnell said. “The problem is that this simplification can hide really important patterns in the data that could indicate potential biases, safety concerns, or just help us learn more about how the system works, because we can’t tell where a system is failing.”
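The failure mode Burnell describes can be shown with a toy example: two systems with identical aggregate accuracy, one of which fails badly on a critical subcategory. The task names and numbers are invented for illustration:

```python
# Two systems, each scored on 180 task instances (1 = correct, 0 = wrong).
# Both aggregate to the same accuracy, but their per-task profiles differ.
results = {
    "system_a": {"easy_math": [1] * 90, "word_problems": [1] * 60 + [0] * 30},
    "system_b": {"easy_math": [1] * 60 + [0] * 30, "word_problems": [1] * 90},
}

def aggregate(scores):
    flat = [s for task in scores.values() for s in task]
    return sum(flat) / len(flat)

def per_task(scores):
    return {t: sum(v) / len(v) for t, v in scores.items()}

for name, scores in results.items():
    print(name, round(aggregate(scores), 3), per_task(scores))
# Both aggregate to ~0.833, yet the per-task breakdown shows system_a
# failing a third of word problems while system_b solves them all.
```

An aggregate leaderboard would rank these systems as equals; the disaggregated view is what reveals where each one breaks, which is precisely the information the paper argues benchmarks should report.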


A Practical Guide for Container Security

Developers and DevOps teams have embraced the use of containers for application deployment. In a report, Gartner stated, "By 2025, over 85% of organizations worldwide will be running containerized applications in production, a significant increase from less than 35% in 2019." On the flip side, various statistics indicate that the popularity of containers has also made them a target for cybercriminals who have been successful in exploiting them. According to a survey released in a 2023 State of Kubernetes security report by Red Hat, 67% of respondents stated that security was their primary concern when adopting containerization. Additionally, 37% reported that they had suffered revenue or customer loss due to a container or Kubernetes security incident. These data points emphasize the significance of container security, making it a critical and pressing topic for discussion among organizations that are currently using or planning to adopt containerized applications.


6 finops best practices to reduce cloud costs

Centralizing cloud costs from public clouds and data center infrastructure is a key finops concern. The first thing finops does is to create a single-pane view of consumption, which enables cost forecasting. Finops platforms can also centralize operations like shutting down underutilized resources or predicting when to shift off higher-priced reserved cloud instances. Platforms like Apptio, CloudZero, HCMX FinOps Express, and others can help with shift-left cloud cost optimizations. They also provide tools to catalog and select approved cloud-native stacks for new projects. ... “Today’s developers now have a choice between monolithic cloud infrastructure that locks them in and choosing to assemble cloud infrastructure from modern, modular IaaS and PaaS service providers,” says Kevin Cochrane, chief marketing officer of Vultr. “By choosing the latter, they can speed time to production, streamline operations, and manage cloud costs by only paying for the capacity they need.” As an example, a low-usage application may be less expensive to set up, run, and manage on AWS Lambda with a database on AWS RDS, rather than running it on AWS EC2 reserved instances.
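The Lambda-versus-EC2 trade-off at the end can be made concrete with back-of-envelope arithmetic. The prices below are illustrative placeholders, not current AWS list prices, and the billing model is simplified (free tiers and storage are ignored):

```python
# Back-of-envelope cost comparison for a low-usage application.
# All prices are assumed for illustration, not current AWS rates.

LAMBDA_PER_M_REQUESTS = 0.20       # $ per million invocations (assumed)
LAMBDA_PER_GB_SECOND = 0.0000167   # $ per GB-second of compute (assumed)
EC2_INSTANCE_PER_HOUR = 0.0416     # $ per hour, small instance (assumed)

def lambda_monthly(requests, avg_ms, mem_gb):
    """Pay-per-use: billed only for invocations and compute time."""
    gb_seconds = requests * (avg_ms / 1000) * mem_gb
    return (requests / 1e6) * LAMBDA_PER_M_REQUESTS \
        + gb_seconds * LAMBDA_PER_GB_SECOND

def ec2_monthly(hours=730):
    """Always-on instance: billed for every hour, used or not."""
    return EC2_INSTANCE_PER_HOUR * hours

# 100k requests/month, 200 ms each, 0.5 GB memory:
print(round(lambda_monthly(100_000, 200, 0.5), 2))  # ≈ 0.19
print(round(ec2_monthly(), 2))                      # ≈ 30.37
```

Under these assumptions the serverless option is two orders of magnitude cheaper at low volume; the relationship inverts once the workload runs hot enough that per-invocation billing exceeds the flat instance cost, which is why the article frames it as a question of usage pattern rather than a blanket rule.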


Artificial Intelligence: A Board of Directors Challenge – Part II

It is essential for organizations to dedicate time and effort to consider the potential unintended consequences or “unknown unknowns” of AI deployments. This will prevent adverse outcomes that may arise if AI is deployed without proper consideration. To achieve this, it is necessary to understand the Rumsfeld Knowledge Matrix. The Rumsfeld Knowledge Matrix is a conceptual framework introduced by Donald Rumsfeld, the former United States Secretary of Defense, to categorize and analyze knowledge and information based on different levels of certainty and awareness. The matrix consists of four quadrants: Known knowns: These are things that we know and are aware of. They represent information that is well understood and can be easily articulated. I call these “Facts.” Known unknowns: These are things that we know we don’t know. In other words, there are gaps in our knowledge or information which we are aware of and recognize as areas where further research or investigation is needed. These are the “Questions” we need to ask.


How to achieve cyber resilience?

Instead of relegating security development to a forgettable annual calendar reminder, a continuous approach must keep security at the forefront of mind throughout the year. Security threats also need to be brought to life with realistic simulation exercises. This approach will provide a much more engaging experience for participants and a far more accurate indication of their abilities. Real-life exercises give far more insight into an individual’s mindset and potential than a certification’s often rote, static nature. Security teams must be ready to respond rapidly and confidently to the latest emerging threats, aligned with industry best practices. They must have the right skills, from closing off newly discovered zero days, to mitigating serious incoming threats like attacks exploiting Log4Shell. But they must also be able to apply them calmly and in control even if they face a looming crisis. This capability can only be developed through continuous exercise.


The IT talent flight risk is real: Are return-to-office mandates the right solution?

Most workers require location flexibility when considering a job change. In addition, most workers in an IT function would only consider a new job or position that allows them to work from a location of their choosing. Requiring employees to return fully on-site is also a risk to DEI. Underrepresented groups of talent have seen improvements in how they work since being allowed more flexibility. For example, most women who were fully on-site prior to the pandemic, but have been remote since, report their expectations for working flexibly have increased since the beginning of the pandemic. Employees with a disability have also found a vast improvement in the quality of their work experience. Since the pandemic, Gartner research shows that knowledge workers with a disability report their working environment now better supports their productivity. In a hybrid environment, this population’s perceptions of equity have also improved, as they have experienced higher levels of respect and greater access to managers.


Common Cybersecurity Risks to ICS/OT Systems

Protecting ICS/OT systems from cyberthreats is crucial for ensuring the resilience of critical infrastructure. Recent cyberattacks on ICS/OT systems have highlighted the potential impact of these attacks on critical infrastructure and the need for organizations to prioritize cybersecurity for their ICS/OT systems. By being aware of common cybersecurity risks and taking proactive steps to mitigate them, organizations can protect their ICS/OT systems and maintain operational resilience. The above-mentioned incidents demonstrate that cyberattacks on ICS/OT systems can cause physical harm, financial losses and public safety risks. Organizations must take steps to protect their ICS/OT systems from cyberthreats, such as conducting regular vulnerability assessments, implementing network segmentation and providing employee training on cybersecurity best practices. Compliance with relevant regulations and standards, and collaboration between IT and OT teams, can also help mitigate cybersecurity risks to ICS/OT systems.


10 emerging innovations that could redefine IT

The most common paradigm for computation has been digital hardware built of transistors that have two states: on and off. Now some AI architects are eyeing the long-forgotten model of analog computation, where values are expressed as voltages or currents. Instead of just two states, these can take an almost infinite number of values, or at least as many as the system’s precision can distinguish. The fascination with the idea stems from the observation that AI models don’t need the same kind of precision as, say, bank ledgers. If some of the billions of parameters in a model drift by 1%, 10% or even more, the others will compensate and the model will often still be just as accurate overall. ... The IT department has a big role in this debate as it tests and deploys the second and third generation of collaboration tools. Basic video chatting is being replaced by more purpose-built tools for enabling standup meetings, casual discussions, and full-blown multi-day conferences. The debate is not just technical. Some of the decisions are being swayed by the investment that the company has made in commercial office space.
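The claim that models tolerate parameter drift can be checked with a toy experiment. This is a minimal sketch, not a real analog workload: it uses a single random linear layer as a stand-in for a model, treats the clean argmax as ground truth, and applies ~1% relative noise to every weight to mimic analog drift.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "model": one random linear layer scoring a batch of inputs.
w = rng.normal(size=(64, 10))      # weights (the "parameters")
x = rng.normal(size=(100, 64))     # a batch of 100 inputs

labels = (x @ w).argmax(axis=1)    # treat the clean predictions as ground truth

# Perturb every weight by ~1% relative noise, as analog drift might.
noisy_w = w * (1 + 0.01 * rng.normal(size=w.shape))
agreement = ((x @ noisy_w).argmax(axis=1) == labels).mean()
print(agreement)                   # typically very close to 1.0
```

Because the winning class usually leads its runner-up by a margin far larger than the score shift a 1% weight perturbation produces, nearly all predictions survive the drift, which is the intuition behind tolerating imprecise analog hardware.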



Quote for the day:

"When you accept a leadership role, you take on extra responsibility for your actions toward others." -- Kelley Armstrong