Daily Tech Digest - July 17, 2022

The Shared Responsibility of Taming Cloud Costs

The cost of cloud impacts the bottom line, and therefore cloud cost management cannot be the job of the CIO alone. It’s important to create a culture or framework where managing cloud costs is a shared responsibility among business, product, and engineering teams, and where it’s a consideration throughout the software development process and in IT operations. To do this, it’s important to shift education left. Like many DevOps principles, “shift left” once had a specific meaning that has become more generalized over time. At its core, the idea of shifting left is to be proactive about cost management in all management and operational processes. It means empowering developers and making operational considerations a key part of application development. Change management must be considered in the context of cost. If organizations educate and empower developers to understand the impact of cloud cost as software is written, they will reap the benefits of building more cost-effective software that improves operational visibility and control.


How AI Regulations Are Shaping Its Current And Future Use

Examining some of the many laws that have been passed in relation to AI, I have identified some best practices for both statewide and nationwide regulation. On a national level, it is crucial both to develop public trust in AI and to have advisory boards that monitor its use. One example is having specific research teams or committees dedicated to identifying and studying deepfakes. In the U.S., Texas and California have legally banned the use of deepfakes to influence elections, and the EU created a self-regulating Code of Practice on Disinformation for all online platforms to achieve similar results. Another necessity is an ethics committee that monitors and advises on the use of AI in digitization activities, a practice currently in place in Belgium (pg. 179). Specifically, this committee encourages companies that use AI to weigh the costs and benefits of implementation against the systems that will be replaced. Finally, it’s important to promote public trust in AI on a national level.


5 key considerations for your 2023 cybersecurity budget planning

The cost of complying with various privacy regulations and security obligations in contracts is going up, Patel says. “Some contracts might require independent testing by third-party auditors. Auditors and consultants are also raising fees due to inflation and rising salaries,” he says. ... “When an organization is truly secure, the cost to achieve and maintain compliance should be reduced,” he says. Evolving regulatory compliance requirements, especially for organizations supporting critical infrastructure, require significant support, Chaddock says. “Even the effort to determine what needs to happen can be costly and detract from daily operations, so plan for increased effort to support regulatory obligations if applicable,” he says. ... If paying for such policies comes out of the security budget, CISOs will need to take into account the rising costs of coverage and other factors. Companies should be sure to include the cost of cyber insurance over time and, more importantly, the costs associated with maintaining effective and secure backup/restore capabilities, Chaddock says.


CISA pulls the fire alarm on Juniper Networks bugs

The networking and security company also issued an alert about critical vulnerabilities in Junos Space Security Director Policy Enforcer — this piece provides centralized threat management and monitoring for software-defined networks — but noted that it's not aware of any malicious exploitation of these critical bugs. While the vendor didn't provide details about the Policy Enforcer bugs, they received a 9.8 CVSS score, and there are "multiple" vulnerabilities in this product, according to the security bulletin. The flaws affect all versions of Junos Space Policy Enforcer prior to 22.1R1, and Juniper said it has fixed the issues. The next group of critical vulnerabilities exists in third-party software used in the Contrail Networking product. In this security bulletin, Juniper issued updates to address more than 100 CVEs dating back to 2013. Upgrading to release 21.4.0 moves the Open Container Initiative-compliant Red Hat Universal Base Image container image from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8, the vendor explained in the alert.


HTTP/3 Is Now a Standard: Why Use It and How to Get Started

As you move from one mast to another, or from behind walls that block or bounce signals, connections are commonly cut and restarted. This is not what TCP likes — it doesn’t really want to communicate without formal introductions and a good firm handshake. In fact, it turns out that TCP’s strict accounting and waiting for that last stray packet just mean that users have to wait around for webpages to load and new apps to download, or for a timed-out connection to be re-established. So to take advantage of the informality of UDP, and to allow the network to use some smart stuff on the fly, the new QUIC (Quick UDP Internet Connections) format got more attention. While we don’t want to see too much intelligence within the network itself, we are much more comfortable these days with automatic decision making. QUIC understands that a site is made up of multiple files, and it won’t blight the entire connection just because one file hasn’t finished loading. The other trend that QUIC follows up on is built-in security. Whereas encryption was optional before (i.e., HTTP or HTTPS), QUIC is always encrypted.
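
You can check whether a server already advertises HTTP/3 without any special tooling: servers announce it via the h3 token in the Alt-Svc response header. A quick sketch using only Python's standard library (the URL is just an example, and network access is required):

```python
# probe a server's Alt-Svc header for an HTTP/3 (h3) advertisement
import urllib.request

resp = urllib.request.urlopen("https://cloudflare.com")
alt_svc = resp.headers.get("Alt-Svc", "")
if "h3" in alt_svc:
    print("Server advertises HTTP/3:", alt_svc)
else:
    print("No HTTP/3 advertisement found")
```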


The enemy of vulnerability management? Unrealistic expectations

First and most importantly, you need to be realistic. Many organizations want critical vulnerabilities fixed within seven days. That is not realistic if you only have one maintenance window per month. Additionally, if you do not have the ability to reboot all your systems every weekend, you are setting yourself up for failure. If you only have one maintenance window per month, there is no reason to set a due date on critical vulnerabilities any shorter than 30 days. For obvious reasons, organizations are nervous about speaking publicly about how quickly they remediate vulnerabilities. One estimate puts the mean time to remediate for private-sector organizations between 60 and 150 days. You can get into that range by setting due dates of 30, 60, 90, and 180 days for severities of critical, high, medium, and low, respectively. Better yet, this is achievable with a single maintenance window each month. As someone who has worked on both sides of this problem, I believe getting a vulnerability fixed eventually is more important than taking a hard line on getting it fixed lightning fast and then having it sit there partially fixed indefinitely. Setting an aggressive policy that your team cannot deliver on only looks tough; in practice it sets the program up to fail.
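
Those numbers translate into a policy that is simple to automate. A minimal sketch using the 30/60/90/180-day values above (the function name is illustrative, not from any particular scanner):

```python
# compute remediation due dates from a severity-based SLA policy
from datetime import date, timedelta

SLA_DAYS = {"critical": 30, "high": 60, "medium": 90, "low": 180}

def due_date(found_on: date, severity: str) -> date:
    return found_on + timedelta(days=SLA_DAYS[severity])

print(due_date(date(2022, 7, 17), "critical"))  # 2022-08-16
```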


‘Callback’ Phishing Campaign Impersonates Security Firms

Researchers likened the campaign to one discovered last year, dubbed BazarCall, that was used by the Wizard Spider threat group. That campaign used a similar tactic to try to spur people to make a phone call to opt out of renewing an online service the recipient is purportedly using, Sophos researchers explained at the time. If people made the call, a friendly person on the other side would give them a website address where the soon-to-be victim could supposedly unsubscribe from the service. However, that website instead led them to a malicious download. ... Researchers did not specify what other security companies were being impersonated in the campaign, which they identified on July 8, they said. In their blog post, they included a screenshot of the email sent to recipients impersonating CrowdStrike, which appears legitimate because it uses the company’s logo. Specifically, the email informs the target that it’s coming from their company’s “outsourced data security services vendor,” and that “abnormal activity” has been detected on the “segment of the network which your workstation is a part of.”


The next frontier in cloud computing

Terms such as “supercloud,” “distributed cloud,” “metacloud” (my vote), and “abstract cloud” are beginning to emerge. Even the term “cloud native” is up for debate. To be fair to the buzzword makers, they all define the concept a bit differently, and I know the wrath that comes with defining a buzzword a bit differently than others do. The common pattern seems to be a collection of public clouds, and sometimes edge-based systems, that work together for some greater purpose. The metacloud concept will be a central focus for the next 5 to 10 years as we begin to put public clouds to work. Having a collection of cloud services managed with abstraction and automation is much more valuable than attempting to leverage each public cloud provider on its terms rather than yours. We want to leverage public cloud providers through abstract interfaces to access specific services, such as storage, compute, artificial intelligence, data, etc., and we want to support a layer of cloud-spanning technology that allows us to use those services more effectively. A metacloud removes the complexity that multicloud brings these days.


A CIO’s guide to guiding business change

When it comes to supporting business change, the “it depends” answer amounts to choosing the most suitable methodology, not the methodology the business analyst has the darkest belt in. But on the other hand, the idea of having to earn belts of varying hue, or their equivalent levels of expertise, in several of these methodologies, just so you can choose the one that best fits a situation, might strike you as too intimidating to bother with. Picking one to use in all situations, and living with its limitations, is understandably tempting. If adding to your belt collection isn’t high on your priority list, here’s what you need to know to limit your hold-your-pants-up apparel to suspenders, leaving the black belts to specialists you bring in for the job once you’ve decided which methodology fits your situation best. Before you can be in a position to choose, keep in mind the six dimensions of process optimization: fixed cost, incremental cost, cycle time, throughput, quality, and excellence. You need to keep these center stage because you can optimize for at most three of them; the ones you choose involve tradeoffs; and each methodology is designed to optimize different process dimensions.


7 Reasons to Choose Apache Pulsar over Apache Kafka

Apache Pulsar is like two products in one. Not only can it handle high-rate, real-time use cases like Kafka, but it also supports standard message queuing patterns, such as competing consumers, fail-over subscriptions, and easy message fan-out. Apache Pulsar automatically keeps track of each client’s read position in the topic and stores that information in its high-performance distributed ledger, Apache BookKeeper. Unlike Kafka, Apache Pulsar can handle many of the use cases of a traditional queuing system, like RabbitMQ. So instead of running two systems — one for real-time streaming and one for queuing — you do both with Pulsar. It’s a two-for-one deal, and those are always good. ... Well, with Apache Pulsar it can be that simple. If you just need a topic, then use a topic. You don’t have to specify the number of partitions or think about how many consumers the topic might have. Pulsar subscriptions allow you to add as many consumers as you want on a topic, with Pulsar keeping track of it all.
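
For a flavor of the queuing side, here is a minimal sketch using the pulsar-client Python package (it assumes a broker running on localhost; the topic and subscription names are made up):

```python
import pulsar

client = pulsar.Client("pulsar://localhost:6650")

# a Shared subscription gives queue-like, competing-consumer semantics
consumer = client.subscribe("my-topic", "my-subscription",
                            consumer_type=pulsar.ConsumerType.Shared)

producer = client.create_producer("my-topic")
producer.send("hello".encode("utf-8"))

msg = consumer.receive()
print(msg.data())
consumer.acknowledge(msg)  # Pulsar tracks the acknowledged position server-side

client.close()
```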



Quote for the day:

"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.

Daily Tech Digest - July 15, 2022

Large-Scale Phishing Campaign Bypasses MFA

In the phishing campaign observed by Microsoft researchers, attackers initiate contact with potential victims by sending emails with an HTML file attachment to multiple recipients in different organizations. The messages claim that the recipients have a voicemail message and need to click on the attachment to access it or it will be deleted in 24 hours. If a user clicks on the link, they are redirected to a site that tells them they will be redirected again to their mailbox with the audio in an hour. Meanwhile, they are asked to sign in with their credentials. At this point, however, the attack does something unique: it uses clever coding to automatically fill in the phishing landing page with the user’s email address, “thus enhancing its social engineering lure,” researchers noted. If targets enter their credentials and get authenticated, they are redirected to the legitimate Microsoft office.com page. However, in the background, the attacker intercepts the credentials and gets authenticated on the user’s behalf, providing free rein to perform follow-on activities, researchers said.


Mergers and acquisitions put zero trust to the ultimate test

Zero trust is getting a hard look by enterprises that are pushing more workloads into the cloud and edge amid more employees working remotely, all of which are beyond the boundaries of datacenter security. The architecture assumes that no user, device, or application on the network can be trusted. Instead, a zero-trust framework relies on identity, behavior, authentication, and policies to verify and validate everything on the network and to determine such issues as access and privileges. ... "When a company [buys another], they have to identify which applications of the acquired company they should keep and which they should eliminate," he said. "Then, for a period of time, the acquired company will only give them limited access to applications in the acquiring company and vice-versa. To do so, traditionally they have to bring the two corporate networks together. When they integrate corporate networks, it creates problems. "Each site has the same IP address name. They call them 'overlapping IP addresses.' Now they have to rename and create the stuff. It takes time, money and effort."


8 servant leadership do’s and don’ts

Being a servant leader doesn’t mean giving up control or “letting people do whatever they want,” Dotlich says. “I don’t think it means that you do whatever [employees] ask either, which is how we normally think of ‘servants.’ But it is really facilitating people’s performance, goals, achievements, and aspirations. In that way you’re serving who they want to be or what they want to achieve.” ... During periods of high pressure, “sometimes we as leaders want to keep pushing forward but that’s exactly the wrong thing to do,” Reis says. “Sometimes it’s just better to take a minute, reframe, and then re-engage.” Leaders can also show empathy with feedback, he says. “It would be easy to hear a list of complaints and for defensiveness to set in,” Reis says. “But the empathy is in understanding that the issues being raised are part of the teammates’ sincere desire to make things better. You’re empathizing with that frustration and really hearing that,” he says. ... It’s important for each organization to define servant leadership “in a way that works in your own system, that people understand and that is not misleading,” Dotlich says.


Researchers trained an AI model to ‘think’ like a baby, and it suddenly excelled

Typically, AI models start with a blank slate and are trained on data with many different examples, from which the model constructs knowledge. But research on infants suggests this is not what babies do. Instead of building knowledge from scratch, infants start with some principled expectations about objects. For instance, they expect that if they attend to an object that is then hidden behind another object, the first object will continue to exist. This is a core assumption that starts them off in the right direction. Their knowledge then becomes more refined with time and experience. The exciting finding by Piloto and colleagues is that a deep-learning AI system modelled on what babies do outperforms a system that begins with a blank slate and tries to learn based on experience alone. ... If you show an infant a magic trick where you violate this expectation, they can detect the magic. They reveal this knowledge by looking significantly longer at events with unexpected, or “magic,” outcomes, compared to events where the outcomes are expected.


12 Ways to Improve Your Monolith Before Transitioning to Microservices

A rewrite is never an easy journey, but by moving from monolith to microservices, you are changing more than the way you code; you are changing the company’s operating model. Not only do you have to learn a new, more complex tech stack but management will also need to adjust the work culture and reorganize people into smaller, cross-functional teams. How to best reorganize the teams and the company are subjects worthy of a separate post. In this article, I want to focus on the technical aspects of the migration. First, it’s important to research as much as possible about the tradeoffs involved in adopting microservices before even getting started. You want to be absolutely sure that microservices (and not other alternative solutions such as modularized monoliths) are the right solution for you. ... During development, you’ll not only be constantly shipping out new microservices but also re-deploying the monolith. The faster and more painless this process is, the more rapidly you can progress. Set up continuous integration and delivery (CI/CD) to test and deploy code automatically.


A Data Professional without Business Acumen Is Like a Sword without a Handle

In my journey to become an impactful data professional, I’ve found three statements to be an excellent pivot: (1) Identify what you love doing in your career, and more importantly, what you do not. It is okay to feel overwhelmed by the depth data science and analytics has to offer; start small with the basics, and build your way up to complex projects at your own pace. (2) Read what people are working on. That can inspire you, set expectations, and introduce you to the latest and greatest in the data community. (3) Take time to create your value proposition as a data person and work to be the subject matter expert for a niche. Set the pace so that people turn to you for knowledge, advice, or to get stuff done. Also, a data professional without business acumen is like a sword without a handle. The ability to translate business problems into data and connect it back to business impact is compelling and much appreciated in today’s world. If all of these still don’t connect with you, there are plenty of other roles in data beyond data scientists and analysts! There’s a lot in store for a technology enthusiast today.


Making sense of data with low-code environments

A serious low-code environment provides data scientists flexibility around the tools they use. At the same time, it allows them to focus on the interesting parts of their job, while abstracting away tool interfacing and the different versions of the libraries involved. A good environment lets data scientists reach out to code if they want to, but ensures they do not have to touch code every time they want to control the internals of an algorithm. Essentially, this allows visual programming of a data flow process — data science done for real is complex, after all. If done right, the low-code environment continues to allow access to new technologies, making it future-proof for ongoing innovations in the field. But the best low-code environments also ensure backward compatibility and include a mechanism to easily package and deploy trained models, together with all the necessary data transformation steps, into production. ... The business people often complain that the data folks work slowly, don’t quite understand the real problem and, at the end of it all, don’t quite arrive at the answer the business side was looking for.


Technology is providing the resilience that businesses need at uncertain times

From the blockchain to the Metaverse to emotional AI, digital technologies are rapidly advancing at a time when enterprises face more pressure than ever to innovate to gain a competitive advantage. How can companies apply human-centric technologies to transform the future of their business? Radically Human, a new book from Accenture Technology leaders Paul Daugherty and H. James Wilson, offers business leaders an easy-to-understand breakdown of today's most advanced human-inspired technologies and an actionable IDEAS framework that will help you approach innovation in a completely new way. In Radically Human, Daugherty and Wilson show this profound shift, fast-forwarded by the pandemic, toward more human -- and more humane -- technology. The book introduces a new innovation framework and the basic building blocks of business -- Intelligence, Data, Expertise, Architecture, and Strategy (IDEAS) -- that are transforming competition. Daugherty also highlights the three stages of human-machine interactions.


Low-code development becoming business skill ‘table stakes’

Cloud computing software provider ServiceNow said that more than 80% of its customer base now uses its low-code solution, App Engine, and that App Engine's active developer base grows by 47% every month. Marcus Torres, general manager of the App Engine business at ServiceNow, said the ability to create business applications with low-code and no-code tools is becoming an expected skill set for businesses. Much of that is because the business side of the house understands the application needs of a company better than the IT shop does. The millennials and younger workers who make up the majority of today's workforce are far more comfortable with technology, including software development, than older workers. "They understand there is an app that provides some utility for them," Torres said. "With these [low-code] platforms, people typically try it out, get some initial success, and then try to do more." Torres has seen groups ranging from facilities teams to human resources departments develop applications, with the development work done by people who typically don't have a technology pedigree.


Why tech professionals are leaving IT companies for MBA

IT experience combined with business training provides a big picture of the direction of a tech firm, from the viewpoint of clients, various departments, cost, and the firm's future. The right kind of MBA program offers hands-on experience of creating products and services and of working in an environment similar to tech firms. Besides soft skills like leadership, teamwork, and communication, the hard skills – problem solving, strategic planning, data analytics – exercised within the framework of the fast-evolving tech world can really increase the hiring value of MBAs with prior tech experience. Good MBA programs also expose their graduates to various hubs, including tech companies. This opens up networking opportunities with peers and current leaders who are all invested in building the right kind of talent for the future. It surely beats being stuck in a dead-end software job role with little learning and development. Good MBA programs also increase the value of their graduates, with better salary opportunities than pre-MBA experience would bring.



Quote for the day:

"A leader or a man of action in a crisis almost always acts subconsciously and then thinks of the reasons for his action." -- Jawaharlal Nehru

Daily Tech Digest - July 11, 2022

What Do Authentication & Authorization Mean In Zero Trust?

Authorization depends on authentication. It makes no sense to authorize a user if you do not have any mechanism in place to make sure the person or service is exactly what, or who, they say they are. Most organizations have some mechanism in place to handle authentication, and many have role-based access controls (RBAC) that group users by role and grant or deny access based on those roles. In a zero trust system, however, both authentication and authorization are much more granular. To return to the castle analogy we explored previously: before zero trust, the network would be considered a castle, and inside the castle there would be many different types of assets. In most organizations, human users would be authenticated individually — they have to prove not only that they belong to a particular role, but that they are exactly the person they say they are. Service users can often also be granularly authenticated. In a RBAC system, however, each user is granted or denied access on a group basis — all the human users in the “admin” category would get blanket access, for example.
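
To make the contrast concrete, here is a minimal, hypothetical sketch: a blanket role check next to a per-request zero-trust check that re-verifies identity, device posture, and per-resource policy (all names and policy data are invented for illustration):

```python
from dataclasses import dataclass

ROLE_GRANTS = {"admin": {"read", "write", "delete"}, "viewer": {"read"}}

def rbac_allowed(role: str, action: str) -> bool:
    # blanket grant: every member of the role gets the same access
    return action in ROLE_GRANTS.get(role, set())

@dataclass
class User:
    name: str
    authenticated: bool

@dataclass
class Device:
    compliant: bool  # e.g., disk encrypted, OS patched

# toy per-resource policy: which (user, action) pairs are allowed on each resource
POLICY = {"payroll-db": {("alice", "read")}}

def zero_trust_allowed(user: User, device: Device, action: str, resource: str) -> bool:
    # every request re-checks identity, device posture, and fine-grained policy
    return (user.authenticated
            and device.compliant
            and (user.name, action) in POLICY.get(resource, set()))

print(rbac_allowed("admin", "delete"))  # True: role membership alone is enough
print(zero_trust_allowed(User("alice", True), Device(True), "read", "payroll-db"))   # True
print(zero_trust_allowed(User("alice", True), Device(False), "read", "payroll-db"))  # False: device fails posture check
```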


As hiring freezes and layoffs hit, is the bubble about to burst for tech workers?

Until now, the tech industry has largely sailed through the economic turbulence that has impacted other industries. Remote working and an urgency to put everything on the cloud or in an app – significantly accelerated by the pandemic – has created fierce demand for those who can create, migrate, and secure software. However, tech leaders are bracing for tough times ahead. According to recent data by CW Jobs, 85% of IT decision makers expect their organization to be impacted by the cost of doing business – including hiring freezes (21%) and pay freezes (20%). We're already seeing this play out, with Tesla, Uber and Netflix amongst the big names to have announced hiring freezes or layoffs in recent weeks. Meanwhile, Microsoft, Coinbase and Meta have all put dampeners on recruiting. If tech workers are concerned about this ongoing tightening of belts, they aren't showing it: the same CW Jobs report found that tech professionals remain confident enough in the industry that 57% expect a pay rise in the next year. Hiring freezes and layoffs don't seem to have had much impact on worker mobility, either: just 24% of professionals surveyed by CW Jobs say they plan to stay in their current role for the next 12 months. 


ERP Modernization: How Devs Can Help Companies Innovate

Many of these ERP-based companies are facing pressure to update to more modern, cloud-based versions of their ERP platforms. But they must run a gauntlet to modernize their legacy applications. In a sense, companies that maintain these complex ERP-based systems find the environments are like “golden handcuffs.” They have become so complicated over time that they restrain IT departments’ innovation efforts, hindering their ability to create supply chain resiliency when it is most needed. To make matters more difficult, the current market is facing a global shortage of the human resources required to get the job of digital transformation and application modernization done, including skilled ERP developers—especially those skilled in more antiquated languages like ABAP. Incoming developer talent is often trained in more contemporary languages and environments such as Java, Python, and SAP’s cloud-based Steampunk. These graduates have their pick of opportunities and gravitate to companies that already work in these newer programming environments. ERP migrations can be hampered by complex, customized systems developed by high-priced, silo-skilled programmers.


Believe it or not, metaverse land can be scarce after all

As we see, technological constraints and business logic dictate the fundamentals of digital realms and the activities these realms can host. The digital world may be endless, but the processing capabilities and memory on its backend servers are not. There is only so much digital space you can host and process without your server stack catching fire, and there is only so much creative leeway you can have within these constraints while still keeping the business afloat. These frameworks create a system of coordinates informing the way users and investors interpret value — and in the process, they create scarcity, too. While a lot of the valuation and scarcity mechanisms come from the intrinsic features of a specific metaverse as defined by its code, real-world considerations have just as much, if not more, weight. And metaverse proliferation will hardly change them or water the scarcity down. ... So, even if they are not too impressive, they will likely be hard to beat for most newer metaverse projects, which, again, takes a toll on the value of their land. By the same token, if you have one AAA metaverse and 10 projects with zero users, investors will go for the AAA one and its lands, as scarce as they may be.


Building Neural Networks With TensorFlow.NET

TensorFlow.NET is a library that provides a .NET Standard binding for TensorFlow. It allows .NET developers to design, train and implement machine learning algorithms, including neural networks. TensorFlow.NET also allows us to leverage various machine learning models and access the programming resources offered by TensorFlow. TensorFlow is an open-source framework developed by Google scientists and engineers for numerical computing. It is composed of a set of tools for designing, training and fine-tuning neural networks. TensorFlow's flexible architecture makes it possible to deploy calculations on one or more processors (CPUs) or graphics cards (GPUs) on a personal computer or server without rewriting code. Keras is another open-source library for creating neural networks. It uses TensorFlow or Theano as a backend where operations are performed. Keras aims to simplify the use of these two frameworks, where algorithms are executed and results are returned to us. We will also use Keras in our example below.
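
For orientation, this is the kind of Keras Sequential model in question, shown in Python; TensorFlow.NET's Keras binding mirrors this API closely (a minimal sketch with made-up layer sizes and random placeholder data, not the article's example):

```python
import numpy as np
from tensorflow import keras

# a tiny fully-connected network: 10 inputs -> 64 hidden units -> 1 output
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x, y = np.random.rand(100, 10), np.random.rand(100, 1)
model.fit(x, y, epochs=2, verbose=0)  # train on random placeholder data
```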


4 examples of successful IT leadership

IT leaders are responsible for implementing technology and data infrastructure across an organization. This can include CIOs, CTOs, and increasingly, CDOs (Chief Data Officers). To do this effectively, IT teams need employee buy-in, illustrating clearly how new technology tools and project management can benefit the company’s mission and goals. To achieve the full support of the employee base, IT teams must explain the implementation process and expected timeline. While data platforms and cloud infrastructure are important, the table stakes are tools that allow for internal communication and collaboration. Many IT teams are leveraging business process management platforms (BPMs), which help enable better collaboration between remote and in-office teams, offering a shared view of projects. These platforms allow for greater visibility and communication across organizations while reducing meeting time and improving workflow efficiencies. Technology has the potential to increase productivity, provide greater visibility of projects for employees and managers, and automate tasks that are repetitive and time-consuming.


Why 5G is the heart of Industry 4.0

The Internet of Things (IoT) is an integral part of the connected economy. Many manufacturers are already using IoT solutions to track assets in their factories, consolidating their control rooms and increasing their analytics functionality through the installation of predictive maintenance systems. Of course, without the ability to connect these devices, Industry 4.0 will, naturally, languish. While low-power wide-area networks (LPWAN) are sufficient for some connected devices, such as smart meters that only transmit very small quantities of data, in manufacturing the opposite is true of IoT deployment, where numerous data-intensive machines are often used within close proximity. This is why 5G connectivity is key to Industry 4.0. In a market reliant on data-intensive machine applications, such as manufacturing, the higher speeds and low latency of 5G are required for effective use of automatic robots, wearables and VR headsets, shaping the future of smart factories. And while some connected devices have utilised 4G networks using unlicensed spectrum, 5G allows this to take place on an unprecedented scale.


How to Handle Authorization in a Service Mesh

A service mesh addresses the challenges of service communication in a large-scale application. It adds an infrastructure layer that handles service discovery, load balancing and secure communication for the microservices. Commonly, a service mesh complements each microservice with an extra component — a proxy often referred to as a sidecar or data plane. The proxy intercepts all traffic from and to its accompanying service. It typically uses mutual TLS (mTLS), an encrypted connection with client authentication, to communicate with other proxies in the service mesh. This way, all traffic between the services is encrypted and authenticated without updating the application. Only services that are part of the service mesh can participate in the communication, which is a security improvement. In addition, the service mesh management features allow you to configure the proxy and enforce policies such as allowing or denying particular connections, further improving security. To implement a Zero Trust architecture, you must consider several layers of security. The application should not blindly trust a request even when receiving it over the encrypted wire.
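
As a minimal illustration of that last point, the service can re-validate the caller's token itself rather than trusting the sidecar alone. This sketch assumes the PyJWT package and hypothetical issuer/audience values:

```python
import jwt  # PyJWT

def authorize(headers: dict, public_key: str) -> dict:
    token = headers["Authorization"].removeprefix("Bearer ")
    # jwt.decode raises if the signature, expiry, issuer, or audience check fails
    return jwt.decode(
        token,
        public_key,
        algorithms=["RS256"],
        issuer="https://idp.example.com",  # assumed identity provider
        audience="orders-service",         # assumed service identifier
    )
```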


DevOps nirvana is still a distant goal for many, survey suggests

"Development teams, in general, have hardly any insight into how customers benefit from their work, and few are able to discuss these benefits with the business," the authors report. "Having such insights ready at hand would improve collaboration between IT and the business. The more customer value metrics a development team tracks, the more positive that team views their working relationship with the business. Without knowing whether the intended value for the customer is being achieved or not, development teams are effectively flying blind." The LeanIX authors calculate that 53% work on a team with a 'low level' of DevOps based on maturity factors. Still, nearly 60% said that they are flexible in adapting to changing customer needs and have CI/CD pipelines set up. At the same time, less than half of engineers build, ship, or own their code or work on teams based on team topologies, indicating a lack of DevOps maturity. Fewer than 20% of respondents said that their development team was able to choose its own tech stack; 44% said they are partly able to, and 38% they are not able to at all.


Survey Shows Increased Reliance on DORA Metrics

Overall, the survey revealed just under half of the respondents (47%) said their organization had a high level of DevOps maturity, defined as having adopted three or more DevOps working methods. Those working methods are: being flexible to changes in customer needs; having implemented a CI/CD platform; all engineers build, ship and own their own code; teams are organized around topologies; and each team is free to choose its own technology stack. Of course, each individual organization will determine for itself what level of DevOps depth is required. For example, not every organization would see the need for teams to be organized around topologies or be free to choose their own technology stack. In fact, Rose said the survey made it clear that larger enterprise IT organizations tended to have a lower overall level of DevOps maturity. One reason for that is many larger organizations are still employing legacy processes to build and deploy software, noted Rose. Most developers are also further along in terms of embracing continuous integration (CI) than IT operations teams are in adopting continuous delivery (CD), added Rose.



Quote for the day:

"It is not joy that makes us grateful. It is gratitude that makes us joyful." -- David Rast

Daily Tech Digest - July 10, 2022

Customer.io Email Data Breach Larger Than Just OpenSea

The company is not revealing how many emails are now at heightened risk of phishing attempts as a result of the "deliberate actions" of the former employee. Non-fungible token marketplace platform OpenSea partially divulged the incident late last month when it warned anyone who had ever shared an email address with it about the unauthorized transfer of contact information. Approximately 1.9 million users have made at least one transaction on the platform, according to data from blockchain analytics firm Dune Analytics. Customer.io did not identify the other affected companies to Information Security Media Group or specify the sectors in which they operate. The affected parties have been alerted, the company says. The incident underscores the continuing threat posed by insiders, who account for 20% of all security incidents, according to the most recent Verizon Data Breach Investigations Report. The costs of insider breaches, whether caused by human error or bad actors, are going up; the Ponemon Institute found a 47% increase over the past two years.


Making the DevOps Pipeline Transparent and Governable

When DevOps was an egg, it really was an approach that was radically different from the norm. And what I mean, obviously for people that remember it back then, it was the continuous... Had nothing to do with Agile. It was really about continuous delivery of software into the environment in small chunks, microservices coming up. It was delivering very specific pieces of code into the infrastructure, continuously, evaluating the impact of that release and then making adjustments and change in respect to the feedback that gave you. So the fail forward thing was very much an accepted behavior, what it didn't do at the time, and it sort of glossed over it a bit, was it did remove a lot of the compliance and regulatory type of mandatory things that people would use in the more traditional ways of developing and delivering code, but it was a fledgling practice. And from that base form, it became a much, much bigger one. So really what that culturally meant was initially it was many, many small teams working in combination of a bigger outcome, whether it was stories in support of epics or whatever the response was.


SQL injection, XSS vulnerabilities continue to plague organizations

Critical and high findings were low in mobile apps, just over 7% for Android apps and close to 5% for iOS programs. Among the most common high and critical errors identified in mobile apps was the hard-coding of credentials into apps. Using these credentials, attackers can gain access to sensitive information, the report explained. More than 75% of the errors found in APIs were in the low category. However, the report warns that low risk doesn’t equate to no risk. Threat actors don’t consider the severity of the findings before they exploit a vulnerability, it warned. Among the highest critical risks found in APIs were missing function-level controls (47.55%) and Log4Shell vulnerabilities (17.48%). Of all high and critical findings across companies, the report noted, 87% were found in organizations with fewer than 200 employees. The report identified several reasons for that, including cybersecurity being an afterthought in relatively small organizations; a dearth of bandwidth, security know-how, and staffing; a lack of security leadership and budget; and the speed of business overpowering the need to do business securely.
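
For context on the headline finding, the standard defense against SQL injection is the parameterized query, shown here with Python's built-in sqlite3 module (an illustrative sketch, not taken from the report):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "a@example.com"))

user_input = "alice' OR '1'='1"  # would dump every row if concatenated into the SQL
rows = conn.execute("SELECT email FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] -- the payload is treated as data, not as SQL
```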


Three golden rules for building the business case for Kubernetes

Cost, customer service and efficiency are the three typical considerations any business weighs up when it comes to making new investments. Whether a new initiative will reduce costs in the long run and be worth the initial expense is a question decision makers weigh up all the time. Kubernetes passes this test because it addresses the challenge of managing the potentially thousands or tens of thousands of containers a large enterprise might have deployed. ... The second consideration is whether the investment will mitigate the risk of losing a customer. Is the ability to serve their needs improved as a result of the changes? Again, Kubernetes meets the criteria here. By taking a microservices approach to applications, it allows them and the underlying resources they need to be scaled up or down, based on the current needs of the organization. ... The third and final consideration is whether the new technology or initiative will improve the way the business operates. What might it achieve that a business couldn’t do before?


Infrastructure-as-Code Goes Low Code/No Code

The cross-disciplinary skill set required by IaC — someone with security, operations and coding experience — is a niche, Thiruvengadam told The New Stack. The San Jose, Calif.-based DuploCloud targets that need with a low-code/no-code solution. “The general idea with DuploCloud is that you can use infrastructure-as-code, but you just have to write a lot less lines of code,” he said. “A lot of people who don’t have all the three skill sets still can operate at the same scale and efficiency, using this technology — that’s fundamentally the core advantage.” Unlike some solutions, which rely on ready-made modules or libraries, Thiruvengadam said that DuploCloud uses a low-code interface to put together the rules for its rules-based engine, which then runs through the rules to produce the output. The self-hosted, single-tenant solution is deployed within the customer’s cloud account. Currently, it supports deployment on Amazon Web Services, Microsoft Azure and Google Cloud, and it can run on-premises as well.


The Compelling Implications of Using a Blockchain to Record and Verify Patent Assignments

Smart contracts could be used to put various types of conditions and obligations on a patent asset. For example, companies might incentivize their inventors to disclose more inventions by placing an obligation on all future owners of an asset to pay the inventors some percentage of future licensing, sales, settlements, or judgments involving that asset (e.g., the inventors get 10% of the total value of such transactions). This would allow inventors of commercially valuable patents to enjoy the financial benefits of their inventions in a fashion that is more equitable than, say, a one-time nominal payout upon filing or grant. Since patents can only be asserted when all owners agree to do so, such contracts would have to clearly separate ownership of a patent asset from an obligation of the owner to compensate a previous owner for the asset's future revenue. Another potential use of smart contracts would be for ownership of an issued patent to revert to its previous owner should the current owner fail to pay maintenance fees on time.
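
As a toy illustration of the inventor-royalty idea (plain Python standing in for on-chain contract logic; the 10% figure comes from the example above, everything else is invented):

```python
INVENTOR_SHARE = 0.10  # inventors' cut of every transaction involving the patent

def settle_transaction(amount: float, inventors: list[str]) -> dict[str, float]:
    royalty = amount * INVENTOR_SHARE
    per_inventor = royalty / len(inventors)
    payouts = {name: per_inventor for name in inventors}
    payouts["current_owner"] = amount - royalty
    return payouts

# a $1M license deal with two named inventors
print(settle_transaction(1_000_000, ["alice", "bob"]))
# {'alice': 50000.0, 'bob': 50000.0, 'current_owner': 900000.0}
```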


Breaking down the crypto ecosystem

According to several Indian and global reports, a reduction in transaction costs is expected to further propel this market's growth in the next few years. In line with global trends, the increasing adoption of digital currency by businesses, coupled with talk of a government-backed digital currency in the country, is further anticipated to bolster the growth of the cryptocurrency market. In the present Web 2.0 environment, establishing trust and creating social identities for network participants has been an uphill task that the ecosystem is unable to overcome. And since almost all economic value is traded based on human relationships, this is a fundamental roadblock to innovation and growth in Web 2.0. However, the rise of cryptocurrency and blockchain has fuelled a rapid transition towards Web 3.0, where we have witnessed exponential growth, especially in enablers like NFTs, which have made it possible to acquire, store, and distribute economic value among users. In fact, the introduction of SBTs (SoulBound Tokens) could be the final piece in the puzzle for the Web 3.0 ecosystem.


What is observability? A beginner's guide

For decades, businesses that control and depend on complex distributed systems have struggled to deal with problems whose symptoms are often buried in floods of irrelevant data or those that show high-level symptoms of underlying issues. The science of root cause analysis grew out of this problem, as did the current focus on observability. By focusing on the states of a system rather than on the state of the elements of the system, observability provides a better view of the system's functionality and ability to serve its mission. It also provides an optimum user and customer experience. Observability is proactive where necessary, meaning it includes techniques to add visibility to areas where it might be lacking. In addition, it is reactive in that it prioritizes existing critical data. Observability can also tie raw data back to more useful "state of IT" measures, such as key performance indicators (KPIs), which are effectively a summation of conditions to represent broad user experience and satisfaction.
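
As a trivial sketch of that last idea, raw health-check samples can be rolled up into a single availability KPI (hypothetical data):

```python
from statistics import mean

# raw per-service health samples collected by monitoring
checks = {
    "web":  ["ok", "ok", "ok", "down", "ok"],
    "auth": ["ok", "ok", "ok", "ok", "ok"],
    "db":   ["ok", "degraded", "ok", "ok", "ok"],
}

availability = {svc: s.count("ok") / len(s) for svc, s in checks.items()}
print(availability)                                      # per-service KPIs
print(f"fleet availability KPI: {mean(availability.values()):.0%}")
```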


10 trends shaping the chief data officer role

CDOs may need to rethink cybersecurity in response to the growth in data sources and the volume of data, said Christopher Scheefer, vice president of intelligent industry at Capgemini Americas. These new and nontraditional data streams require additional methods of securing and managing access to data. "The importance of cybersecurity in a pervasively connected world is a trend many CDOs cannot ignore due to the growing threats of IP infringement, regulatory risks and exposure to a potentially damaging event," Scheefer said. Rethinking and reimagining cybersecurity is no small feat. The complexity of integrating connected products and operations into the business presents an incredible amount of risk. Establishing proper governance and tools, and working with cybersecurity leadership, is critical. It is the CDO's job to ensure the business does not constrain itself by limiting external connections and services that could bring competitive advantage and paths to growth, Scheefer said.


Streamlining Unstructured Data Migration for M&A and Divestitures

It’s common to take all or most of the data from the original entity and dump it onto storage infrastructure at the new company. While this may seem like the simplest way to handle a data migration, it’s problematic for several reasons. First, it’s highly inefficient. You end up transferring lots of data that the new business may not actually need or records for which the mandatory retention period may have expired. A blind data dump from one business to another also increases the risk that you’ll run afoul of compliance or security requirements that apply to the new business entity but not the original one. For instance, the new business may be subject to GDPR data privacy mandates because of its location in Europe. But if you simply move data between businesses without knowing what’s in the data or which mandates it needs to meet, you’re unlikely to meet the requirements following the transfer. Last but not least, blindly moving and storing data deprives you of the ability to trace the origins of data after the fact. 



Quote for the day:

"Leaders must be good listeners. It's rule number one, and it's the most powerful thing they can do to build trusted relationships." -- Lee Ellis

Daily Tech Digest - July 09, 2022

Ray Kurzweil Wants to Upload Your Brain to the Cloud

Well, this can go one of two ways. Either this brain/cloud situation will be an incredibly beneficial superpower, or it could be just another farming device for data mining and ad sales. My take: If it’s a beneficial superpower then it won’t be given to the general public. Superpower for the rich. Farming device for the regular people. And thank you very much but I am farmed enough. My Hinge updates don’t need to be sent to my cerebellum. I can’t talk about taking a trip to Costa Rica without flights popping up on my phone. I’m grateful for the ways technology has touched my life but let me remind people about the Flo app. This is a period and fertility tracking app that settled with the FTC in May for selling its users’ personal health data without their knowledge. While there are definitely huge potential advances that could be made from brain/cloud merges, I can only think of social media companies that are designed to addict us, with at least one of these apps in the recent past tracking our eye movements to see what we liked so we could be coaxed to spend more time using it. It’s not all bad but I am not looking to plug in forever. And I don’t trust these companies to do good.


NIST’s pleasant post-quantum surprise

To understand the risk, we need to distinguish between the three cryptographic primitives that are used to protect your connection when browsing on the Internet:

Symmetric encryption - With a symmetric cipher there is one key to encrypt and decrypt a message. Symmetric ciphers are the workhorse of cryptography: they’re fast, well understood and, luckily, as far as is known, secure against quantum attacks. ... Symmetric encryption alone is not enough: which key do we use when visiting a website for the first time? We can’t just pick a random key and send it along in the clear, as then anyone surveilling that session would know that key as well. You’d think it’s impossible to communicate securely without ever having met, but there is some clever math to solve this.

Key agreement - Also called a key exchange, this allows two parties that never met to agree on a shared key. Even if someone is snooping, they are not able to figure out the agreed key. Examples include Diffie–Hellman over elliptic curves, such as X25519. The key agreement prevents a passive observer from reading the contents of a session, but it doesn’t help defend against an attacker who sits in the middle and does two separate key agreements: one with you and one with the website you want to visit.
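
For the curious, today's (pre-quantum) key agreement is only a few lines of code. A minimal sketch of X25519, mentioned above, using the pyca/cryptography package:

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# each side combines its own private key with the other's public key
alice_shared = alice_private.exchange(bob_private.public_key())
bob_shared = bob_private.exchange(alice_private.public_key())

assert alice_shared == bob_shared  # both sides arrive at the same secret
```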


Buggy 'Log in With Google' API Implementation Opens Crypto Wallets to Account Takeover

The first bug involved the common feature found in mobile apps that allow users to log in using an external service, like Apple ID, Google, Facebook, or Twitter. In this case, the researchers examined the "log in with Google" option — and found that the authentication token mechanism could be manipulated to accept a rogue Google ID as being that of the legitimate user. The second bug allowed researchers to get around two-factor authentication. A PIN-reset mechanism was found to lack rate-limiting, allowing them to mount an automated attack to uncover the code sent to a user's mobile number or email. "This endpoint does not contain any sort of rate limiting, user blocking, or temporary account disabling functionality. Basically, we can now run the entire 999,999 PIN options and get the correct PIN within less than 1 minute," according to the researchers. Each security issue on its own provided limited abilities to the attacker, according to the report. "However, an attacker could chain these issues together to propagate a highly impactful attack, such as transferring the entire account balance to his wallet or private bank account."
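
The missing control is straightforward to add. A minimal sketch of per-account rate limiting (illustrative and in-memory only; a real service would persist counters and temporarily lock accounts):

```python
import time
from collections import defaultdict

MAX_ATTEMPTS, WINDOW_SECONDS = 5, 300
failed_attempts = defaultdict(list)  # account_id -> timestamps of recent failures

def pin_attempt_allowed(account_id: str) -> bool:
    now = time.time()
    recent = [t for t in failed_attempts[account_id] if now - t < WINDOW_SECONDS]
    failed_attempts[account_id] = recent
    return len(recent) < MAX_ATTEMPTS

def record_failed_attempt(account_id: str) -> None:
    failed_attempts[account_id].append(time.time())
```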


How To Become A Self-Taught Blockchain Developer

The Blockchain developer must provide original solutions to complex issues, such as those involving high integrity and command and control. The developer also performs complex analysis, design, development, testing, and debugging of computer software, particularly for specific product hardware or for a company's technical service lines. Developers carry out computer system selection, operating architecture integration, and program design. Finally, they use their understanding of one or more platforms and programming languages while working on a variety of systems. There will undoubtedly be challenges for the Blockchain developer. For instance, the developer must fulfill the criteria of a Blockchain development project despite using old technology and its restrictions. A Blockchain developer needs specialized skills because of the difficulty of understanding the technical realities of developing decentralized cryptosystems, processes that are beyond the normal IT development skill set.


Machine learning begins to understand human gut

While human gut microbiome research has a long way to go before it can offer this kind of intervention, the approach developed by the team could help get there faster. Machine learning algorithms are often produced with a two-step process: accumulate the training data, then train the algorithm. But the feedback step added by Hero and Venturelli's team provides a template for rapidly improving future models. Hero's team initially trained the machine learning algorithm on an existing data set from the Venturelli lab. The team then used the algorithm to predict the evolution and metabolite profiles of new communities that Venturelli's team constructed and tested in the lab. While the model performed very well overall, some of the predictions identified weaknesses in the model's performance, which Venturelli's team shored up with a second round of experiments, closing the feedback loop. "This new modeling approach, coupled with the speed at which we could test new communities in the Venturelli lab, could enable the design of useful microbial communities," said Ryan Clark, co-first author of the study, who was a postdoctoral researcher in Venturelli's lab when he ran the microbial experiments.
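
A toy, self-contained sketch of that feedback loop: train, test the model's proposals "in the lab," then fold the results back into the training set (synthetic data, not the study's actual pipeline):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def lab_experiment(X):
    # stand-in for real measurements of proposed communities
    return X @ np.array([1.5, -2.0])

# round 1: train on the existing data set
X_train = rng.normal(size=(20, 2))
y_train = lab_experiment(X_train)
model = Ridge().fit(X_train, y_train)

# round 2: test predictions on new candidates, then close the loop
X_new = rng.normal(size=(10, 2))
y_new = lab_experiment(X_new)
X_train = np.vstack([X_train, X_new])
y_train = np.concatenate([y_train, y_new])
model = Ridge().fit(X_train, y_train)  # retrained with the new results
```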


Jorge Stolfi: ‘Technologically, bitcoin and blockchain technology is garbage’

It is the only thing that blockchain could contribute: the absence of a central authority. But that only creates problems. Because to have a decentralized database you have to pay a very high price. You must ensure that all miners do “proof of work.” It takes longer, and it is not even secure, because in the past there have been occasions where they have had to rewind several hours' worth of blocks to remove a bad transaction, in 2010 and 2013. The conditions that made that possible are still there, and that’s why blockchain technology is a fraud: it promises to do something that people already know how to do. ... It is the only digital system that does not follow customary money laundering laws. That’s why criminals use it. Once you have paid a ransom, there is no way for the victim to cancel the payment and get the money back; not even the government can do it easily. It is anonymous, and when a hacker encrypts your data, they do not have to enter your system directly, where they would leave a trace. He has botnets, computers that he has already hacked, so tracking him down is difficult.


How to Write Secure Source Code for Proprietary Software

Source code is at the mercy of developers and anyone else that has access to it. That means limiting access to your source code and establishing security guidelines for those with access is vital for increasing security. It's also important to realize that insider threat actors aren't always malicious. Often, insider threats come from mistakes or negligent actions taken by employees. ...  Outside threats come from outside of your development team. They may come from competitors that want to use the code to improve their own. Or, they can come from hackers who will attempt to sell your source code or pick it apart looking for vulnerabilities. The point is, whether a leak comes from inside or outside threats, it can have terrible consequences. Source code leaks can lead to additional attacks, exposing large amounts of sensitive data. Source code leaks can also lead to financial losses by giving competitors an advantage. And your customers will think twice before dealing with a developer that has exposed valuable customer data in the past.


How IoT and digital twins could help CIOs meet ESG pledges

This inevitably leads to accusations of greenwashing, where marketing departments hijack the ambitions of organisations before any serious, robust plan is in place. For CIOs tasked with bringing down emissions and adhering to targets, this can be a huge problem. A recent IBM CEO study finds that CEOs are coming under increasing pressure from stakeholders to act on sustainability. It cites “frustrations” with organisations’ “all talk and no action”. Culture is seen as a significant issue in hampering any attempts to co-ordinate carbon emission strategies. “If you want to avoid the trap of greenwashing, it needs to start with the CEO,” says Alicia Asín, CEO of Libelium, an IoT business based in Zaragoza, Spain. Asín, speaking on a panel at IoT World Congress, added that this creates a culture where the whole organisation needs to look at the design and sustainability credentials of every technology offering for every sustainable project. She used an example of a farm customer that is using IoT to reduce the amount of water in irrigation and to reduce the level of pesticides being used on their crops.


GitHub Copilot is the first real product based on large language models

The success of GitHub Copilot and Codex underlines one important fact. When it comes to putting LLMs to real use, specialization beats generalization. When Copilot was first introduced in 2021, CNBC reported: “…back when OpenAI was first training [GPT-3], the start-up had no intention of teaching it how to help code, [OpenAI CTO Greg] Brockman said. It was meant more as a general purpose language model [emphasis mine] that could, for instance, generate articles, fix incorrect grammar and translate from one language into another.” But while GPT-3 has found mild success in various applications, Copilot and Codex have proven to be great hits in one specific area. Codex can’t write poetry or articles like GPT-3, but it has proven to be very useful for developers of different levels of expertise. Codex is also much smaller than GPT-3, which means it is more memory- and compute-efficient. And given that it has been trained for a specific task, as opposed to the open-ended and ambiguous world of human language, it is less prone to the pitfalls that models like GPT-3 often fall into.


LockBit explained: How it has become the most popular ransomware

After obtaining initial access to networks, LockBit affiliates deploy various tools to expand their access to other systems: credential dumpers like Mimikatz; privilege escalation tools like ProxyShell; tools used to disable security products and various processes, such as GMER, PC Hunter and Process Hacker; network and port scanners to identify Active Directory domain controllers; and remote execution tools like PsExec or Cobalt Strike for lateral movement. The activity also involves obfuscated PowerShell and batch scripts, and rogue scheduled tasks for persistence. Once deployed, the LockBit ransomware can spread to other systems via SMB connections using collected credentials, as well as through Active Directory group policies. When executed, the ransomware disables Windows volume shadow copies and deletes various system and security logs. The malware then collects system information such as hostname, domain information, local drive configuration, remote shares and mounted storage devices, and begins encrypting all data on the local and remote devices it can access.
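The shadow-copy deletion step is noisy and therefore makes a convenient detection hook. Below is a minimal defensive sketch, not from the article: it polls the standard Windows `vssadmin list shadows` command and flags a sudden drop to zero shadow copies; the polling interval and the alert action are illustrative assumptions.

```python
# Crude canary for the shadow-copy deletion behaviour described above.
# Assumes a Windows host and a privileged account (vssadmin needs admin).
import subprocess
import time

def count_shadow_copies() -> int:
    """Count the volume shadow copies reported by `vssadmin list shadows`."""
    out = subprocess.run(
        ["vssadmin", "list", "shadows"],
        capture_output=True, text=True, check=False,
    ).stdout
    # Each shadow copy is listed with its own "Shadow Copy ID:" line.
    return out.count("Shadow Copy ID:")

baseline = count_shadow_copies()
while True:
    time.sleep(60)  # polling interval is an arbitrary choice
    current = count_shadow_copies()
    if baseline > 0 and current == 0:
        print("ALERT: all volume shadow copies vanished; possible ransomware")
    baseline = current
```

A real deployment would feed this signal into an EDR or SIEM rather than printing, and would pair it with the other behaviours listed above, such as event-log clearing and rogue scheduled tasks.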



Quote for the day:

"If you want people to to think, give them intent, not instruction." -- David Marquet

Daily Tech Digest - July 07, 2022

Metaverse Standards Forum Makes Data Interoperable But Only For Big Tech

Interoperability is the driving force for the growth and adoption of the open metaverse. Hence, the Metaverse Standards Forum aims to analyze the interoperability necessary for running the metaverse. More than 30 companies signed on as founding members of the forum. Game developers, architects, and engineers are mere clicks away from building the next cutting-edge metaverse project with artificial intelligence and advanced hardware. Setting interoperability standards with consideration for available technology is crucial to the mass adoption of the metaverse. As with the Metaverse Standards Forum, some key players, such as Meta, are missing from the Oasis Consortium. And in the past, groups like this have become smaller and smaller once internal conflict inevitably arises. The Metaverse Standards Forum is led by the Khronos Group, a nonprofit consortium working on AR/VR, artificial intelligence, machine learning, and more. Khronos already tried to set a standard for VR APIs with its similarly named VR Standards Initiative in 2016, which included companies like Google, Nvidia, Epic Games and Oculus, the last of which is now part of Meta.


Identity Access Management Is Set for Exploding Growth, Big Changes — Report

As SaaS and cloud subscription services have proliferated in the space, smaller firms increasingly have found IAM within their reach, and the study expects this trend to snowball: whereas the subscription model makes up 60% of the market now, the researchers forecast that in five years it will account for 94% of all IAM spending. Meanwhile, other, broader IT trends such as the explosion in cloud computing, bring-your-own-device (BYOD) policies, mobile computing, the Internet of Things (IoT), and more geographically dispersed workers are all spurring greater IAM services spending to solve an acute need for saner access control. "There are more devices and services to be managed than ever before, with different requirements for associated access privileges," according to Juniper's analysts. "With so much more to keep track of, as employees migrate through different roles in an organization, it becomes increasingly difficult to manage identity and access." According to Naresh Persaud, managing director in cyber-identity services for Deloitte Risk & Financial Advisory, the market has been especially jumpstarted in the last 12 to 18 months as organizations work to accommodate a broader range and larger scale of remote-work situations.


Working with Microsoft’s .NET Rules Engine

Getting started with the .NET Rules Engine is relatively simple. You will first need to consider how to separate rules from your application, and then how to describe them as lambda expressions. There are options for building your own custom rules using public classes that can be referenced from a lambda expression, an approach that gets around the limitation that lambda expressions can only use methods from .NET’s System namespace. You can find a JSON schema for the rules in the project’s GitHub repository. It’s a comprehensive schema, but in practice you’re likely to need only a relatively basic structure for your rules. Start by giving your rules workflow a name, followed by a nested list of rules. Each rule needs a name, an event that’s raised if it’s successful, an error message and type, and a rule expression defined as a lambda expression. Your rule expression needs to be written in terms of the inputs to the rules engine: each input is an object, and the lambda expression evaluates the various values associated with it.
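Putting that structure together, a minimal workflow definition might look like the sketch below. The workflow name, rule name, event, and expression are invented for illustration; `input1` is the default name the engine gives the first input object passed to it.

```json
[
  {
    "WorkflowName": "SampleDiscountWorkflow",
    "Rules": [
      {
        "RuleName": "LoyalCustomerDiscount",
        "SuccessEvent": "ApplyDiscount",
        "ErrorMessage": "Order does not qualify for a discount.",
        "ErrorType": "Error",
        "RuleExpressionType": "LambdaExpression",
        "Expression": "input1.TotalOrders > 5"
      }
    ]
  }
]
```

At run time you load this JSON, construct a rules engine from it, and execute the workflow against your input object; if the object's TotalOrders property exceeds 5, the ApplyDiscount success event is raised, otherwise the rule reports the error message and type given above.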


10 Questions to Ask Yourself Before Starting Your Entrepreneurial Journey

Entrepreneurship is over-glorified and misrepresented on social media. In reality, it is about building a business that solves a problem for a consumer; it is not about driving nice cars or posting nice pictures. In fact, real entrepreneurship looks quite contrary to what we see on social media. Do we require a certain level of luck, genetics and the right environment to be entrepreneurs? Yes, somewhat, for sure. But anyone can solve problems anywhere in the world, and that is true for both small problems and big ones. The choice lies in deciding to find people who have needs, wants and issues you can offer a solution for. It is also a choice each of us gets to make about how well we wish to solve that issue: how obsessed we are willing to become with the solution, and how far above and beyond we are willing to go in serving customers well. Beyond the business solution also comes personal and emotional responsibility: shaping and growing ourselves to be able to handle and maneuver through constant stress and difficulties.


Don’t let automation break change management

Where automation is essential and unavoidable, network teams need to make sure all the good they can do with automation is not done at the expense of, or in conflict with, one of the other pillars of enterprise IT: change management. They need to make sure automation is controlled by change management, and that they keep change management processes in step with their increasing reliance on automation. One aspect is to implement change management on the automation itself, including the scripts, config files, and playbooks used to manage the network. Code management tools help with this: check-out and check-in events help staff remember to follow the other parts of the proper process. Applying change management at this level means describing the intended modifications to the automation, testing them, planning deployment, having a fallback plan to the previous known-good code where applicable, and determining specific criteria by which to judge whether the change succeeded or needs to be rolled back, as sketched below.
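As a toy illustration of that checklist (the record shape and example values here are invented, not from the article), a change-managed modification to an automation artifact might capture exactly those elements:

```python
from dataclasses import dataclass, field

@dataclass
class AutomationChange:
    """One change-managed modification to a script, config file, or playbook."""
    description: str                  # the intended modification
    test_evidence: str                # how the change was tested
    deployment_plan: str              # when and how it rolls out
    fallback_ref: str                 # known-good version to revert to
    success_criteria: list[str] = field(default_factory=list)

# Hypothetical example record for a network playbook change.
change = AutomationChange(
    description="Raise BGP hold timers in the edge-router playbook",
    test_evidence="Dry run against the lab topology; diff peer-reviewed",
    deployment_plan="Off-peak window, one site at a time",
    fallback_ref="git tag playbooks-known-good-2022-07",
    success_criteria=["No BGP session flaps for 24h", "No new alerts raised"],
)
print(change.success_criteria)
```

Keeping such records next to the code, under the same version control, is one way to make the check-in events mentioned above carry their change-management context with them.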


Imagination is key to effective data loss prevention

SecOps teams are charged with protecting data on a network or endpoint in each of its forms: at rest, in use, and in motion. To be in the driver’s seat and create the appropriate rules or policies to protect data across these three classifications, teams must understand their environment fully. This is why organizations should consider implementing a flexible, scalable XDR (extended detection and response) architecture that can integrate with their current security tools and connect all the dots to eliminate security gaps. With native integrations and connections for security policy orchestration across data and users, endpoints and collaboration, clouds and infrastructure, an XDR architecture provides SecOps teams with maximum visibility and control. ... Knowing what to protect, even before establishing protection, is key; so much so that comprehensive data visibility is a critical tenet for any SecOps team. Achieving it gives security teams the flexibility to create data protection parameters tailored to their own specific needs, creating an environment where the only limit on what they can achieve is their imagination.


The importance of digital skills bootcamps to UK tech industry success

The success of digital skills bootcamps in helping to secure the UK tech industry’s future is heavily contingent on the level of involvement from businesses. At present, however, not enough organisations are devoting the time needed to upskill or reskill staff, with research conducted by MPA Group finding that over a third of companies – 35 per cent – allow workers less than two hours per week for training, research, and development. Although there may be a number of reasons for this, MPA Group’s research indicated that ‘a lack of budget’ was considered by businesses to be the largest barrier to workplaces allowing staff to spend time on development. Digital skills bootcamps are helping to solve this problem by enabling companies to take advantage of the considerable state investment in the initiative, giving organisations more affordable access to industry-led training. What’s more, with bootcamps having already been trialled to great success in places like the West Midlands – where approximately 2,000 adults have been trained in essential tech skills over the past few years – firms have the opportunity to hire recent programme graduates who can help pass on what they have learned to their co-workers.


The Parity Problem: Ensuring Mobile Apps Are Secure Across Platforms

To build a robust defense, mobile developers need to implement multi-layered protection that is both ‘broad’ and ‘deep’. By broad, I'm talking about multiple security features from different protection categories that complement each other, such as encryption + obfuscation. By ‘deep’, I mean that each security feature should have multiple methods of detection or protection. For example, a jailbreak-detection SDK that only performs its checks when the app launches won’t be very effective, because attackers can easily bypass the protection. Or consider anti-debugging, an important runtime defense that prevents attackers from using debuggers to perform dynamic analysis, where they run the app in a controlled environment in order to understand or modify its behavior; a sketch of the ‘deep’ idea follows below. There are many different types of debuggers: some based on LLDB for native code like C++ or Objective-C, others that inspect at the Java or Kotlin layer, and many more. Every debugger works a little differently in terms of how it attaches to and analyzes the app.
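To show the ‘deep’ principle in a language-neutral way (Python here purely for brevity; a real mobile app would do this in Swift, Kotlin, or native code, and the signals and threshold below are illustrative, not a production anti-debug scheme), the sketch checks for an attached debugger through two independent signals and at more than one point in time, instead of once at launch:

```python
# Illustrative only: combines two debugger signals and re-checks over time.
import sys
import time

def tracer_suspected() -> bool:
    """Return True if either of two independent debugger signals fires."""
    # Signal 1: a trace function (installed by pdb-style debuggers) is active.
    if sys.gettrace() is not None:
        return True
    # Signal 2: timing skew; single-stepping inflates a trivial loop's runtime.
    start = time.perf_counter()
    for _ in range(10_000):
        pass
    return (time.perf_counter() - start) > 0.05  # threshold is an assumption

# "Deep" means checking repeatedly during execution, not just at startup.
for step in range(3):
    if tracer_suspected():
        print(f"debugger suspected at checkpoint {step}")
    time.sleep(1)  # ... real application work would happen here ...
```

The same pattern, multiple independent signals re-evaluated at multiple points, is what separates a protection that attackers bypass once from one they must defeat continuously.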


4 ways CIOs can create resilient organizations

As CIO, you need to make sure your technology investments enable change. After all, you might need to support an entirely remote employee population. You might need to offer new capabilities that attract top talent or quickly shut down business in a region wracked by geopolitical conflict. Organizations invest large sums in migrating to the cloud. One reason is the ability to grow with needs. But technology scale is no longer the primary benefit of the cloud. And scale is no longer a guarantee of resilience. Rather, focus your cloud and software-as-a-service (SaaS) investments on supporting rapid change. Multi-cloud strategy, containerization, agile DevSecOps development methodologies: All should be designed around elasticity that equips you to make quick wins or pivot to new business models. ... Data analytics can provide holistic views and predictive models that help CIOs and others understand emerging trends. Those insights support data-driven decision-making and ultimately, resilience. That’s because you no longer have to rely on gut feel to prepare for an otherwise unpredictable future. 


What happens when there’s not enough cloud?

Most companies struggle to find enough customers to buy their products. According to Selipsky in a Mad Money interview, cloud companies like AWS might have the opposite problem. “IT is going to move to the cloud. And it’s going to take a while. You’ve seen maybe only, call it 10% of IT today move. So it’s still day 1. It’s still early. … Most of it’s still yet to come.” Years ago I noted that the cloud would take time, not because there’s limited demand, but precisely because, even with enterprises in a full sprint to the cloud, there are trillions of dollars’ worth of IT to modernize. As MongoDB CMO Peder Ulander responded to McLaughlin, “If anything, the growing shortage of capacity is a watershed moment for AWS, Google Cloud, and Microsoft Azure.” (Disclosure: I work for MongoDB.) In a hot market, it’s standard for demand to outstrip supply; Ulander cites products as diverse as Teslas and Tickle Me Elmo toys. What’s interesting here is that we’re having the enterprise equivalent of a 1996 Tickle Me Elmo shortage.



Quote for the day:

"Leaders know the importance of having someone in their lives who will unfailingly and fearlessly tell them the truth." -- Warren G. Bennis