Daily Tech Digest - October 18, 2021

Magnanimous machines: Why AI should work for people and not the other way around

The consolidation of power amongst a Big Tech elite fused with state intelligence grows ever stronger. These entities can know everything about us, yet carefully hide their own clandestine, obfuscated activities. The best defence against this asymmetry is radical, mandated, and cryptographically secure transparency. To illustrate, commercial aircraft carry two essential data recorders (‘black boxes’). One monitors the aircraft itself, and the other monitors cockpit chatter. Both recorders are necessary to understand why an incident has occurred. For the same reasons, we need a similar approach to humane technology. Transparency is the foundation upon which every other aspect of ethical technology rests. It is essential to understand a system and its functions, as well as the attributes of the organisation and of those who steer it. Through transparency, we can verify that the incentive structures within organisations are aligned towards producing honest, good-faith outcomes, understand what may have gone wrong, and learn how to fix it in the future.


8 Keys to Failproof App Modernization

Typically, modernization initiatives are strategized or rolled out before major events or milestones: data center and vendor contracts coming up for renewal, software and hardware platforms going End of Service and Support Life, government-imposed deadlines to implement regulatory and compliance requirements, or an ageing workforce and the risk of a skills shortage. In all such scenarios, the accumulated technical debt is so high that these become multi-year, multi-million-dollar modernization programs. Risks are correspondingly higher in such large programs. To optimize costs and minimize risks, the temptation is sometimes to somehow get these workloads to the target platform [containerize or rehost without really changing the underlying architecture]. This results in more technical debt and will necessitate another modernization initiative in a few years, and so it goes. The chances of success are much higher if the initiatives are incremental in nature and time-bound, say 3-6 months. In fact, it is a recommended practice in agile development to pay down technical debt regularly, every single sprint.


Three key issues to tackle before smart cities become a reality

Many smart systems require data to be validated and assimilated in real time for it to be relevant. This poses a problem, in that it requires every citizen to agree to their data being collected and shared, which in turn requires trust. That means the collection of data, and its use to influence critical decisions in smart cities, needs careful consideration. However, citizens often worry about being ‘tracked’ – a difficult perception to eradicate in a world where privacy and security are among the biggest challenges each of us faces. Overcoming it requires us to build a comprehensive data privacy and security strategy into any smart city development, with local governments then responsible for educating individuals and society on how their data will be stored, who has access, and how it can be used. Such strategies require careful consideration, as any mistakes that harm public trust could impact the success of smart cities. The NHS COVID-19 app is a good example of this – once people lost trust in the application, it took only a matter of days for thousands of them to delete it.


Engineering Digital Transformation for Continuous Improvement

Getting organizations to invest in improvements and embrace new ways of working is a challenge. They don’t just need the right technical solutions; they also need to address the organizational change management challenges that are creating resistance to new ways of working. Organizations frequently have champions who have ideas for improvement and are trying to influence change without much success. These champions find that the harder they push for change, the more people resist. We can have all the best approaches in the world, but if we can’t figure out how to overcome this resistance, organizations will never adopt them and realize the benefits. While pushing for change is the natural approach, research by organizational change management experts, like Professor Jonah Berger in his book “The Catalyst,” suggests this is the wrong approach. His research shows that the harder you push for change, the more people resist. Whenever people feel they are being influenced, their anti-persuasion radar kicks in and they start shooting down ideas and resisting the change being offered.


A transactional approach to power

As Battilana and Casciaro tell it, it’s not your personal or positional power that determines your effectiveness in any given situation. It is your ability to understand what resources the involved parties want and how the resources are distributed—that is, the balance of power. “We find this extremely compelling,” explains Casciaro, “because it brings power relationships—whether they are interpersonal, intergroup, interorganizational, or international—down to four simple factors.” Taking this a step further, the ability to shift the balance of power within a situation determines your success at exercising power. Battilana and Casciaro find there are several key strategies that support this ability to rebalance power. If you have resources the other party values, attraction is a key strategy. You try to increase the value of those resources for the other party. Personal and corporate brand-building are organized around this strategy. If the other party has too many paths to access your resources, consolidation is a key strategy. You try to eliminate or otherwise lessen the alternatives. Employees join unions to limit the alternatives of employers and increase their power.


Treasury Dept. to Crypto Companies: Comply with Sanctions

The announcement is the latest in a series of moves from the Biden administration to combat ransomware, following high-profile attacks this year that disrupted the East Coast's fuel supply during the Colonial Pipeline incident, jeopardized the nation's meat supply by attacking JBS USA, and knocked some 1,500 downstream organizations offline by zeroing in on managed service provider Kaseya over the July Fourth holiday. Last month, the Treasury Department blacklisted Russia-based cryptocurrency exchange Suex for allegedly laundering tens of millions of dollars for ransomware operators, scammers and darknet markets. In its latest issuance, the department alleges that over 40% of Suex’s transaction history was associated with illicit actors, involving the proceeds from at least eight ransomware variants. Similarly, this week the White House National Security Council facilitated a 30-nation, two-day "counter-ransomware" event, where senior officials strategized on ways to improve network resiliency, address illicit cryptocurrency usage, and heighten law enforcement collaboration and diplomacy.


DevSecOps: 11 questions to ask about your security strategy now

Where does friction exist between security and business goals? The question is relatively self-explanatory: DevSecOps exists in part to remove friction and bottlenecks that have historically introduced risks rather than reduced them. The question also has a subtext: What are we doing about it? This friction often goes unaddressed because, well, it’s unaddressed – as in, people avoid pointing it out or talking about it, whether because of poor relationships, fear factors, cultural acceptance, or other reasons. Leaders need to take an active role here by showing their willingness to talk about it, without finger-pointing or other toxic behaviors. “Leaders should constantly be probing and trying to understand the friction points between the business and DevSecOps,” says Jerry Gamblin, director of security research at Kenna Security, now part of Cisco. “These often uncomfortable conversations will help you refocus your team’s goals on the company’s goals.” A willingness to have those uncomfortable conversations as a pathway to positive long-term change is a key characteristic of a healthy culture.


The importance of crisis management in the age of ransomware

How to prepare for ransomware attacks is an often-asked question. From my point of view, the best action is to go through the checklist of security controls that prevent hackers from taking control of your network. Organizations like Servadus offer a Ransomware Readiness Assessment, which helps organizational leadership identify current risks to the corporation. Of course, having up-to-date incident response and business continuity plans is part of that assessment. Beyond that, the real value comes from remediating weak cybersecurity controls. Additionally, organizations should implement a framework to shore up security control implementation and sustainability. Many organizations try to maintain compliance and security controls but are vulnerable to attacks 3 to 6 months after validating the security controls they have in place. The long-term strategy is about validating sustainable security controls. The service framework also allows organizations to evaluate threats to the organization and vulnerabilities in the system software in use.


The importance of staff diversity when it comes to information security

A diverse information security team diversifies decision-making, which contributes to the organisation’s overall strength. The fact that each employee provides a unique perspective on the problem makes it simpler to recognise and address hidden vulnerabilities in the security operations, as well as to identify and correct the deficiencies of other employees, helping them to grow in their own areas of expertise. Consider the likelihood of a breach, as well as the “red team” that will be assigned to deal with the situation: the SOC analyst looks at the logs, the security engineer looks for the vulnerability, and other team members work on the defensive approach to guard against it. As a result, in order to maintain a strong information security team, it is critical to have a diverse workforce. It facilitates task efficiency while encouraging alternative viewpoints. ... An organisation’s security cannot be handled by a single product, and in order to maintain and handle those security products, companies require a large number of employees. So this should be common knowledge in order to work efficiently and effectively without being reliant on a single person or product.


Is it right or productive to watch workers?

The recent rise in employee surveillance accelerated during the pandemic, largely because it had to, but the bottom line is that we are now more than ever accustomed to being watched. We accept the intrusion of cameras in novel spaces under the promise of increased safety; doorbell cameras spring to mind, but so too do webcams and smartphones; we accept data tracking to prove we’re “not a robot” on websites; we accept that our information, our clicks, and our preferences are observed and noted. We seem to be primed now to accept that companies have a reasonable expectation to protect their own safety, so to speak, by monitoring us. One recent survey by media researcher Clutch of 400 US workers found that only 22% of 18- to 34-year-old employees were concerned about their employers having access to their personal information and activity from their work computers. Meanwhile, in a pre-pandemic survey of US workers by US media group Axios from August 2019, 62% of respondents agreed that employers should be able to use technology to monitor employees.



Quote for the day:

"Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kantor

Daily Tech Digest - October 17, 2021

Multi-User IP Address Detection

When an Internet user visits a website, the underlying TCP stack opens a number of connections in order to send and receive data from remote servers. Each connection is identified by a 4-tuple (source IP, source port, destination IP, destination port). Repeated requests from the same web client will likely be mapped to the same source port, so the number of distinct source ports can serve as a good indication of the number of distinct client applications. By counting the number of open source ports for a given IP address, you can estimate whether this address is shared by multiple users. User agents provide device-reported information such as browser and operating system versions. For multi-user IP detection, you can count the number of distinct user agents in requests from a given IP. To avoid overcounting web clients per device, you can exclude requests identified as triggered by bots and count only requests from user agents used by web browsers. There are some tradeoffs to this approach: some users may use multiple web browsers, and other users may have exactly the same user agent.
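
As a rough illustration of the counting approach just described, the sketch below tallies distinct source ports and distinct browser user agents per source IP and flags addresses that exceed a threshold. The log field names, the naive bot filter, and the thresholds are illustrative assumptions, not the article's actual implementation.

    from collections import defaultdict

    BOT_MARKERS = ("bot", "crawler", "spider")  # naive bot filter (assumption)

    def estimate_clients_per_ip(requests, port_threshold=20, ua_threshold=5):
        """requests: iterable of dicts with 'src_ip', 'src_port', 'user_agent'."""
        ports = defaultdict(set)
        agents = defaultdict(set)
        for r in requests:
            ua = r["user_agent"].lower()
            if any(marker in ua for marker in BOT_MARKERS):
                continue  # exclude requests identified as bot-triggered
            ports[r["src_ip"]].add(r["src_port"])
            agents[r["src_ip"]].add(r["user_agent"])
        # Flag an IP as likely shared when either signal exceeds its threshold.
        return {
            ip: {
                "distinct_ports": len(ports[ip]),
                "distinct_user_agents": len(agents[ip]),
                "likely_shared": len(ports[ip]) > port_threshold
                or len(agents[ip]) > ua_threshold,
            }
            for ip in ports
        }

In practice, the thresholds would need tuning against traffic from known single-user addresses and known shared addresses such as carrier-grade NAT.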


Critical infrastructure security dubbed 'abysmal' by researchers

"While nation-state actors have an abundance of tools, time, and resources, other threat actors primarily rely on the internet to select targets and identify their vulnerabilities," the team notes. "While most ICSs have some level of cybersecurity measures in place, human error is one of the leading reasons due to which threat actors are still able to compromise them time and again." Some of the most common issues allowing initial access cited in the report include weak or default credentials, outdated or unpatched software vulnerable to bug exploitation, credential leaks caused by third parties, shadow IT, and the leak of source code. After conducting web scans for vulnerable ICSs, the team says that "hundreds" of vulnerable endpoints were found. ... Software accessible with default manufacturer credentials allowed the team to access the water supply management platform. Attackers could have tampered with water supply calibration, stop water treatments, and manipulate the chemical composition of water supplies.


What is a USB security key, and how do you use it?

There are some potential drawbacks to using a hardware security key. First of all, you could lose it. While security keys provide a substantial increase in security, they also provide a substantial increase in responsibility. Losing a security key can result in a serious headache. Most major websites suggest that you set up backup 2FA methods when enrolling a USB security key, but there's always a small but real chance that you could permanently lose access to a specific account if you lose your key. Security-key makers suggest buying more than one key to avoid this situation, but that can quickly get expensive. Cost is another issue. A hardware security key is the only major 2FA method for which you have to spend money. You can get a basic security key supporting the U2F/WebAuthn standards for $15, but some websites and workplaces require specialized protocols for which compatible keys can cost up to $85 each. Finally, limited usability is also a factor. Not every site supports USB security keys. If you're hoping to use a security key on every site for which you have an account, you're guaranteed to come across at least a few that won't accept your security key.


Future-proofing the organization the ‘helix’ way

The leaders need a high level of domain expertise, obviously, but other skills as well. As capability managers, these leaders must excel at strategic workforce management, for example—not short-sighted resource attribution for the products at hand, but the strategic foresight and long-term perspective to understand what the workload will be today, tomorrow, three to five years from now. They need to understand what skills they don’t have in-house and must acquire or build. These leaders become supply-and-demand managers of competence. They must also be excellent—and rigid—portfolio managers who make their resource decisions in line with the overall transformation. The R&D organization, for example, cannot start research projects inside a product line whose products are classified as “quick return,” even if they have people idle. It’s a different mindset. In fact, R&D leaders don’t necessarily have to be the best technologists in order to be successful. They must be farsighted and able to anticipate trends—including technological trends—but ultimately what matters is their ability to build the department in a way that ensures it’s ready to carry the demands of the organization going forward.


Robots Will Replace Our Brains

Over the years, despite numerous fruitless attempts, no one has come close to recreating this organ with all its intricate details; it is challenging to fathom such an invention in the scientific world at this point, even considering the discoveries that surface every other day. As one research director mentions, we are very good at gathering data and developing algorithms to reason with that data. Nevertheless, that reasoning is only as sound as the data, which is one step removed from reality for the AI we have now. For instance, science fiction movies depict only a thin line separating human intelligence from artificial intelligence. ... A new superconducting switch being constructed by researchers at the U.S. National Institute of Standards and Technology (NIST) may soon enable computers to analyze and make decisions much as humans do. The ultimate goal is to integrate this switch into everyday life, from transportation to medicine. The invention contains an artificial synapse that processes electrical signals just as a biological synapse does and converts them to an appropriate output, just as the brain does.


Data Storage Strategies - Simplified to Facilitate Quick Retrieval of Data and Security

No matter what the reason for the downtime, it can be very costly. An efficient data strategy goes beyond just deciding where data will be kept on a server. When it comes to disaster recovery, hardware failure, or human mistake, it must contain methods for backing up the data and ensuring that it is simple and fast to restore. Putting a disaster recovery plan in place is a good start and helps guarantee that data and the related systems are available after a minimum of disruption. Cloud-based disaster recovery, as well as virtualization, are now required components of every disaster recovery strategy. Working together, they can assure you that no customer will ever experience more downtime than they can afford at any given moment. By relying on a cloud storage service, the company can outsource the storage issue. By using online data storage, the business can minimize the costs associated with internal resources. With this technology, the business does not need any internal resources or assistance to manage and keep its data; the data warehousing consulting services provider takes care of everything.


RISC-V: The Next Revolution in the Open Hardware Movement

You could always build your own proprietary software and be better than your competitors, but the world has changed. Now almost everyone is standing on the shoulders of giants. When you need an operating system kernel for a new project, you can use Linux directly. No need to recreate a kernel from scratch, and you can also modify it for your own purpose (or write your drivers). You’ll be certain to rely on a broadly tested product because you are just one of a million users doing the same. That would be exactly what relying on an open source CPU architecture could provide. No need to design things from scratch; you can innovate on top of the existing work and focus on what really matters to you, which is the value you are adding. At the end of the day, it means lowering the barriers to innovate. Obviously, not everyone is able to design an entire CPU from scratch, and that’s the point: You can bring only what you need or even just enjoy new capabilities provided by the community, exactly the same way you do with open source software, from the kernel to languages.


The Conundrum Of User Data Deletion From ML Models

As the name says, approximate deletion enables us to eliminate the majority of the implicit data associated with users from the model. They are ‘forgotten,’ but only in the sense that our models can be retrained at a more opportune time. Approximate deletion is particularly useful for rapidly removing sensitive information or unique features associated with a particular individual that could be used for identification in the future, while deferring computationally intensive full model retraining to times of lower computational demand. Approximate deletion can even accomplish the exact deletion of a user’s implicit data from the trained model under certain assumptions. The researchers have tackled the deletion challenge differently than their counterparts in the field. Additionally, they describe a novel approximate deletion technique for linear and logistic models whose cost is linear in the feature dimension and independent of the training data. This is a significant improvement over conventional approaches, which remain superlinearly dependent at all times.


9 reasons why you’ll never become a Data Scientist

Have you ever invested an entire weekend in a geeky project? Have you ever spent your nights browsing GitHub while your friends were out partying? Have you ever said no to doing your favorite hobby because you’d rather code? If you can’t answer yes to any of the above, you’re not passionate enough. Data Science is about facing really hard problems and sticking with them until you find a solution. If you’re not passionate enough, you’ll shy away at the sight of the first difficulty. Think about what attracts you to becoming a Data Scientist. Is it the glamorous job title? Or is it the prospect of plowing through tons of data in search of insights? If it is the latter, you’re heading in the right direction. ... Only crazy ideas are good ideas. And as a Data Scientist, you’ll need plenty of those. Not only will you need to be open to unexpected results — they occur a lot! But you’ll also have to develop solutions to really hard problems. This requires a level of extraordinary thinking that you can’t achieve with ordinary ideas.


Why Don't Developers Write More Tests?

If deadlines are tight or the team leaders aren’t especially committed to testing, it is often one of the first things software developers are forced to skip. On the other hand, some developers just don’t think tests are worth their time. “They might think, ‘this is a very small feature, anyone can create a test for this, my time should be utilized in something more important.’” Mudit Singh of LambdaTest told me. ... In truth, there are some legitimate limitations to automated tests. Like many complicated matters in software development, the choice to test or not is about understanding the tradeoffs. “Writing automated tests can provide confidence that certain parts of your application work as expected,” Aidan Cunniff, the CEO of Optic told me, “but the tradeoff is that you’ve invested a lot of time ‘stabilizing’ and making ‘reliable’ that part of your system.” ... While tests might have made my new feature better and more maintainable, they were technically a waste of time for the business because the feature wasn’t really what we needed. We failed to invest enough time understanding the problem and making a plan before we started writing code.



Quote for the day:

"Leaders are readers, disciples want to be taught and everyone has gifts within that need to be coached to excellence." -- Wayde Goodall

Daily Tech Digest - October 15, 2021

9 common risk management failures and how to avoid them

Known for decades as the hub of technical innovation, Silicon Valley has evolved into a bastion of toxic "bro culture," according to Alla Valente, senior analyst at Forrester Research. She also cited other forms of toxic work culture when companies fail to mitigate risks that can alienate employees and customers. Facebook's lukewarm response to the Cambridge Analytica scandal, Valente argued, has significantly eroded its trustworthiness and market potential. Wells Fargo's executives turning a blind eye to the warning signs of the bank's predatory selling practices with their customers "was a strategic decision," Valente said. "It could have been fixed, but fixing culture is never easy." ... Efficiency and resiliency sit at opposite ends of the spectrum, Matlock said. Greater efficiency can lead to greater profits when things go well. The auto industry realized significant savings by creating a supply chain of thousands of third-party suppliers spread across multiple tiers. But during the pandemic, there were massive disruptions in supply chains that lacked resiliency. 


Mandating a Zero-Trust Approach for Software Supply Chains

SBOMs are a great first step towards supply-chain transparency, but there is more that needs to be done. As an analogy, many equate the SBOM to the ingredient labels on food. With that perspective, we can see parallels between our software supply chain and the food supply chain. Subsequently, the need for end-to-end provenance and resistance against tampering should be clear. For this reason, I am encouraged by Google’s proposed Supply-Chain Levels for Software Artifacts (SLSA) framework that moves us towards a common language that increases the transparency and integrity of our software supply chain. However, for some software that performs critical functions (e.g., security), food is an inadequate comparison. It may be more apt to compare this type of software to medicine. This analogy brings forth additional considerations. For example, the drug-facts label on medicines includes not just the ingredients, but also usage guidelines and contraindications (i.e., what to look for in case something goes wrong). Furthermore, as we’ve all seen with the COVID-19 vaccine, medicines must undergo intensive review and testing before they are approved for use.


Data Consistency Between Microservices

The root of the problem is querying data from other boundaries that will be immediately inconsistent the moment it’s returned, just as in my first example without a serializable transaction. If you’re making HTTP or gRPC calls to other services to retrieve data that you require to perform business logic, you’re dealing with inconsistent data. If you store a local cache copy that’s eventually consistent, you’re dealing with inconsistent data. Is having inconsistent data an issue? Go ask the business. If it is, then you need to get all the relevant data that’s required within the same boundary. There were two pieces of data we ultimately needed. We required the Quantity on Hand from the warehouse. In reality, in the distribution/warehouse domain, you don’t rely on the “Quantity on Hand”. When dealing with physical goods, the point of truth is what is physically in the warehouse, not what the software/database states. ... The system is eventually consistent with the real world. Because of this, Sales has the concept of Available to Promise (ATP), which is a business function for customer order promising based on what’s been ordered but not yet shipped, purchased but not yet received, etc.
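
To make the ATP idea concrete, here is a minimal sketch of the calculation it implies; the field names and the single-number formula are simplifying assumptions (real ATP logic is usually time-phased), not code from the article.

    # Hypothetical, simplified Available to Promise (ATP) calculation.
    # Sales promises new orders from expected supply and demand rather than
    # from a live "quantity on hand" read out of another service's database.
    def available_to_promise(quantity_on_hand: int,
                             purchased_not_received: int,
                             ordered_not_shipped: int) -> int:
        """Units that Sales can still promise to new customer orders."""
        return quantity_on_hand + purchased_not_received - ordered_not_shipped

    # Example: 100 units in the warehouse, 50 inbound from suppliers,
    # 120 already promised to customers but not yet shipped.
    print(available_to_promise(100, 50, 120))  # -> 30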


5 ways CIOs are redefining teamwork for a hybrid world

Most CIOs face similar grand experiments as hybrid work environments are becoming permanent. They are evaluating which team structures have been successful remotely and are looking to replicate them, while balancing innovation, collaboration, mentorship, and culture transfer, which have traditionally been done in person. Some 30% of IT leaders surveyed by IDC say they prefer an “online-first” policy for collaboration, and practices that started during the pandemic will likely continue indefinitely. While many workers say they have been more productive working remotely, that doesn’t always equate to better teamwork. “We’ve squeezed a lot of innovation out of necessity, but some of that serendipitous innovation that occurs through creative collision has been less,” says Aaron De Smet, senior partner at McKinsey and Co., who spoke at the IDG Future of Work Summit in October. “Companies have started to get their heads around a hybrid workforce, but I don’t think they’ve cracked what hybrid interactions look like. More of the work people do is now part of a cross-functional team. It’s part of a collaborative effort … ” De Smet says. 


3 Signs You’re Ready For A Machine Learning Job When You’ve Come From Another Field

Your sense of direction is what lets you know where you are, or which way to go, even when lingering in unfamiliar territory. Religious people often argue that a lack of direction is a result of not having a purpose, and to some degree, I agree. You shouldn’t have to wait for a sense of purpose before you can be happy. Imagine how miserable life would be if that were the case! When a person begins to question their sense of direction in regards to machine learning, it usually has to do with a lack of appreciation for how far they’ve come. As you learn more, it’s harder to see the small increments by which you improve, and this may feel as though you are no longer learning — especially when you compare it to a time when you were learning something new every day. Given that you meet the generic requirements of the machine learning role you want, it’s time to apply for a job that challenges you differently than working alone could. Start applying.


Google Opens Up Spanner Database With PostgreSQL Interface

The integration of PostgreSQL into Cloud Spanner is deep; it is not just some conversion overlay. At the database schema level, the PostgreSQL interface for Cloud Spanner supports native PostgreSQL data types and its data definition language (DDL), which is the syntax for creating users, tables, and indexes for databases. The upshot is that if you write a schema for the PostgreSQL interface for Cloud Spanner, it will port to and run on any real PostgreSQL database, which means customers are not trapped on Google Cloud if they use this service in production and want to switch. But customers do have to be careful. Spanner features, like table interleaving, have been added to the PostgreSQL layer because they are important features in Spanner. You can get stuck because of these. ... The PostgreSQL interface for Cloud Spanner compiles PostgreSQL queries down to Spanner’s native distributed query processing and storage primitives and does not just support the PostgreSQL wire protocol, which allows clients and myriad third-party analytics tools to interact with the PostgreSQL database.


The pursuit of transformation: Opportunities and pitfalls

Some transformations fail when there is a lack of alignment between the company’s strategy and its employees, customers and partners. There is a famous fable of an ant trying its hardest to change its trajectory but not realising that it is sitting on an elephant that is going in the opposite direction. No matter how hard the little ant tries, it will not reach its destination as long as the elephant is not in alignment. All organizations have a culture and an emotional ethos, which, if left unaddressed, can sabotage the move to change. When Satya Nadella took over Microsoft in 2014, he had to first restructure the company to eliminate destructive internal competition so that all departments could focus on a common services goal. The result was a two-and-a-half-fold growth in the stock price over 5 years. On the other hand, when GE decided to launch GE Digital as a transformation vehicle, it did not release the subsidiary from the obligation of quarterly revenue and profitability targets. In addition, the subsidiary had to continue to meet GE’s software needs across business units, leaving it without the bandwidth to focus on true innovation and transformation.


How Machine Learning can be used with Blockchain Technology?

Machine learning algorithms have remarkable learning capabilities that can be applied in the blockchain to make the chain smarter than before. This integration can help improve the security of the blockchain’s distributed ledger. The computational power of ML can also be used to reduce the time taken to find the golden nonce, and ML can be used to improve data-sharing routes. Further, we can build better machine learning models by using the decentralized data architecture of blockchain technology. Machine learning models can use the data stored in the blockchain network for making predictions or for data analysis. Take, for example, a smart blockchain-based application where data is collected from different sources such as sensors, smart devices and IoT devices, and the blockchain works as an integral part of the application; a machine learning model can then be applied to that data for real-time analytics or predictions.


The tech recruiter – an unsung hero

The idiom ‘your first impression is your last impression’ holds true for recruiters. They have one opportunity to deliver that perfect elevator pitch to the candidate – convincing them why your company provides the best opportunity for them – in the time it takes to ride an elevator. Landing the right impression will determine the candidate’s unalterable opinion and employment decision. To understand this better, let’s take a quick look at the talent landscape today. With the digitalization megatrend sweeping across Tech Inc., organizations are scurrying to bolster their workforce across technology skill sets. The Economic Times reported that Indian IT firms plan to hire over 150,000 freshers in FY22, and NASSCOM remarked that India’s five largest companies are likely to hire 96,000 employees this year. Although this will be a huge boost for the $150 billion industry, the demand-supply technology talent gap is only widening. Today, it is the candidates who hold the power and have the pleasure of the last word, as prolonged notice periods allow them time to hedge their bets with the four or five job offers they have on hand. And the more skilled they are, the more offers they juggle.


Better Scrum Through Essence

First an anecdote from Jeff Sutherland – ‘The VP of one of the biggest banks in the country [USA] said recently: “I have 300 product owners and only three were delivering. The other 297 were not delivering”. And, he said, “I checked on where the three that were delivering, where they got the right way of working. They went to your class. So, you need to tell me what you are doing differently.” I said, “What we are doing differently is using Ivar’s work with Essence to really clarify to people what is working, what is not working, what you need to do next to improve things.” By using Essence on many Scrum Master courses we (Jeff, I and others) have also observed that of the 21 components of the original Scrum Essentials, the average team implements 1/3 of them well, 1/3 of them poorly and 1/3 of them not at all. With that level and quality of implementation it is not surprising that we are not always seeing the full potential that Scrum offers. At the heart of getting better Scrum through Essence are the use of the Scrum Foundation, the Scrum Essentials and the Scrum Accelerator practices to play games, facilitate events and drive team improvements.



Quote for the day:

"Leaders know the importance of having someone in their lives who will unfailingly and fearlessly tell them the truth." -- Warren G. Bennis

Daily Tech Digest - October 15, 2021

You’ve migrated to the cloud, now what?

When thinking about cost governance, for example, in an on-premises infrastructure world, costs increase in increments when we purchase equipment, sign a vendor contract, or hire staff. These items are relatively easy to control because they require management approval and are usually subject to rigid oversight. In the cloud, however, an enterprise might have 500 virtual machines one minute and 5,000 a few minutes later when autoscaling functions engage to meet demand. Similar differences abound in security management and workload reliability. Technology leaders with legacy thinking are faced with harsh trade-offs between control and the benefits of cloud. These benefits can include agility, scalability, lower cost, and innovation, and they require heavy reliance on automation rather than manual legacy processes. This means that the skillsets of an existing team may not be the same skillsets needed in the new cloud order. When writing a few lines of code supplants plugging in drives and running cable, team members often feel threatened. This can mean that success requires not only a different way of thinking but also a different style of leadership.


A new edge in global stability: What does space security entail for states?

Observers recently recentred the debate on a particular aspect of space security, namely anti-satellite (ASAT) technologies. The destruction of assets placed in outer space is high on the list of issues they identify as most pressing and requiring immediate action. As a result, some researchers and experts rolled out propositions to advance a transparent and cooperative approach, promoting the cessation of destructive operations both in outer space and launched from the ground. One approach was the development of ASAT Test Guidelines, first initiated in 2013 by a Group of Governmental Experts on Outer Space Transparency and Confidence-Building Measures. Another is through general calls to ban anti-satellite tests, not only to build a more comprehensive arms control regime for outer space and prevent the production of debris, but also to reduce threats to space security and regulate destabilising force. Many space community members threw their support behind a letter urging the United Nations (UN) General Assembly to take up for consideration a kinetic anti-satellite (ASAT) Test Ban Treaty for maintaining safe access to Earth orbit and decreasing concerns about collisions and the proliferation of space debris.


From data to knowledge and AI via graphs: Technology to support a knowledge-based economy

Leveraging connections in data is a prominent way of getting value out of data. Graph is the best way of leveraging connections, and graph databases excel at this. Graph databases make expressing and querying connection easy and powerful. This is why graph databases are a good match in use cases that require leveraging connections in data: Anti-fraud, Recommendations, Customer 360 or Master Data Management. From operational applications to analytics, and from data integration to machine learning, graph gives you an edge. There is a difference between graph analytics and graph databases. Graph analytics can be performed on any back end, as they only require reading graph-shaped data. Graph databases are databases with the ability to fully support both read and write, utilizing a graph data model, API and query language. Graph databases have been around for a long time, but the attention they have been getting since 2017 is off the charts. AWS and Microsoft moving in the domain, with Neptune and Cosmos DB respectively, exposed graph databases to a wider audience.


Observability Is the New Kubernetes

So where will observability head in the next two to five years? Fong-Jones said the next step is to support developers in adding instrumentation to code, expressing a need to strike a balance between easy and out of the box versus annotations and customizations per use case. Suereth said that over the next five years the OpenTelemetry project is heading toward being useful to app developers, where instrumentation can be particularly expensive. “Target devs to provide observability for operations instead of the opposite. That’s done through stability and protocols.” He said that right now observability, like with Prometheus, is much more focused on operations rather than developer languages. “I think we’re going to start to see applications providing observability as part of their own profile.” Suereth continued that the OpenTelemetry open source project has an objective to have an API with all the traces, logs and metrics available in a single pull, but it’s still to be determined how much data should be attached to it.
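
For readers unfamiliar with what “adding instrumentation to code” looks like in practice, here is a minimal sketch using the OpenTelemetry Python API and SDK (the opentelemetry-api and opentelemetry-sdk packages); the span name, attribute, and console exporter are illustrative choices, not a recommendation from the speakers.

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

    # Wire up a tracer provider that prints finished spans to the console.
    provider = TracerProvider()
    provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer(__name__)

    def handle_order(order_id: str) -> None:
        # Manual instrumentation: wrap the unit of work in a span and attach
        # attributes that operations teams can later query.
        with tracer.start_as_current_span("handle_order") as span:
            span.set_attribute("order.id", order_id)
            ...  # business logic goes here

    handle_order("A-1001")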


Data Exploration, Understanding, and Visualization

Many scaling methods require knowledge of critical values within the feature distribution and can cause data leakage. For example, a min-max scaler should be fit on training data only rather than the entire data set. When the minimum or maximum comes from the test set, you have introduced some data leakage into the prediction process. ... The one-dimensional frequency plot shown below each distribution provides additional understanding of the data. At first glance, this information looks redundant, but these plots directly address problems that arise when representing data in histograms or as distributions. For example, when data is transformed into a histogram, the number of bins is specified. It is difficult to decipher any pattern with too many bins, and with too few bins, the data distribution is lost. Moreover, representing data as a distribution assumes the data is continuous. When data is not continuous, this may indicate an error in the data or an important detail about the feature. The one-dimensional frequency plots fill in the gaps where histograms fail.
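
A minimal sketch of the precaution described above, using scikit-learn; the toy data and split are illustrative. The scaler learns its minimum and maximum from the training split only, so extremes in the test set cannot leak into the transformation.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import MinMaxScaler

    X = np.random.rand(200, 3) * 100               # toy feature matrix
    X_train, X_test = train_test_split(X, test_size=0.25, random_state=42)

    scaler = MinMaxScaler()
    X_train_scaled = scaler.fit_transform(X_train)  # fit on training data only
    X_test_scaled = scaler.transform(X_test)        # reuse the training min/max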


DevSecOps: A Complete Guide

Both DevOps and DevSecOps use some degree of automation for simple tasks, freeing up time for developers to focus on more important aspects of the software. The concept of continuous processes applies to both practices, ensuring that the main objectives of development, operation, or security are met at each stage. This prevents bottlenecks in the pipeline and allows teams and technologies to work in unison. By working together, development, operational or security experts can write new applications and software updates in a timely fashion, monitor, log, and assess the codebase and security perimeter as well as roll out new and improved codebase with a central repository. The main difference between DevOps and DevSecOps is quite clear. The latter incorporates a renewed focus on security that was previously overlooked by other methodologies and frameworks. In the past, the speed at which a new application could be created and released was emphasized, only to be stuck in a frustrating silo as cybersecurity experts reviewed the code and pointed out security vulnerabilities.


Skilling employees at scale: Changing the corporate learning paradigm

Corporate skilling programs have been founded on frameworks and models from the world of academia. Even when we have moved to digital learning platforms, the core tenets of these programs tend to remain the same. There is a standard course with finite learning material, a uniformly structured progression to navigate the learning, and the exact same assessment tool to measure progress. This uniformity and standardization have been the only approach for organizations to skill their employees at scale. As a result, organizations made a trade-off; content-heavy learning solutions which focus on knowledge dissemination but offer no way to measure the benefit and are limited to vanity metrics have become the norm for training the workforce at large. On the other hand, one-on-one coaching programs that promise results are exclusive only to the top one or two percent of the workforce, usually reserved for high-performing or high-potential employees. This is because such programs have a clear, measurable, and direct impact on behavioral change and job performance.


The Ultimate SaaS Security Posture Management (SSPM) Checklist

The capability of governance across the whole SaaS estate is both nuanced and complicated. While the native security controls of SaaS apps are often robust, it is the organization's responsibility to ensure that all configurations are properly set — from global settings to every user role and privilege. It only takes one unknowing SaaS admin to change a setting or share the wrong report, and confidential company data is exposed. The security team is burdened with knowing every app, user and configuration and ensuring they are all compliant with industry and company policy. Effective SSPM solutions address these pain points and provide full visibility into the company's SaaS security posture, checking for compliance with industry standards and company policy. Some solutions even offer the ability to remediate right from within the solution. As a result, an SSPM tool can significantly improve security-team efficiency and protect company data by automating the remediation of misconfigurations throughout the increasingly complex SaaS estate.


Why gamification is a great tool for employee engagement

Gamification is the beating heart of almost everything we touch in the digital world. With employees working remotely, this is the golden solution for employers. If applied in the right format, gaming can help create engagement in today's remote working environment, motivate personal growth, and encourage continuous improvement across an organization. ... In the connected workspace, gamification is essentially a method of providing simple goals and motivations that rely on digital rather than in-person engagement. At the same time, there is a tacit understanding among both game designer and "player" that when these goals are aligned in a way that benefits the organization, the rewards often impact more than the bottom line. Engaged employees are a valuable part of defined business goals, and studies show that non-engagement impacts the bottom line. At the same time, motivated employees are more likely to want to make the customer experience as satisfying as possible, especially if there is internal recognition of a job well done.


10 Cloud Deficiencies You Should Know

What happens if your cloud environment goes down due to challenges outside your control? If your answer is “Eek, I don’t want to think about that!” you’re not prepared enough. Disaster preparedness plans can include running your workload across multiple availability zones or regions, or even in a multicloud environment. Make sure you have stakeholders (and back-up stakeholders) assigned to any manual tasks, such as switching to backup instances or relaunching from a system restore point. Remember, don’t wait until you’re faced with a worst-case scenario to test your response. Set up drills and trial runs to make sure your ducks are quacking in a row. One thing you might not imagine the cloud being is … boring. Without cloud automation, there are a lot of manual and tedious tasks to complete, and if you have 100 VMs, they’ll require constant monitoring, configuration and management 100 times over. You’ll need to think about configuring VMs according to your business requirements, setting up virtual networks, adjusting for scale and even managing availability and performance. 



Quote for the day:

"Leaders begin with a different question than others. Replacing who can I blame with how am I responsible?" -- Orrin Woodward

Daily Tech Digest - October 13, 2021

Stop Using Microservices. Build Monoliths Instead.

Building out a microservices architecture takes longer than rolling the same features into a monolith. While an individual service is simple, a collection of services that interact is significantly more complex than a comparable monolith. Functions in a monolith can call any other public functions. But functions in a microservice are restricted to calling functions in the same microservice. This necessitates communication between services. Building APIs or a messaging system to facilitate this is non-trivial. Additionally, code duplication across microservices can’t be avoided. Where a monolith could define a module once and import it many times, a microservice is its own app — modules and libraries need to be defined in each. ... The luxury of assigning microservices to individual teams is reserved for large engineering departments. Although it’s one of the big touted benefits of the architecture, it’s only feasible when you have the engineering headcount to dedicate several engineers to each service. Reducing code scope for developers gives them the bandwidth to understand their code better and increases development speed.
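
To make the contrast concrete, the hypothetical sketch below shows the same price lookup first as an ordinary in-process call inside a monolith and then as an HTTP call to a separate pricing service; the endpoint, payload shape, and function names are assumptions for illustration only.

    import json
    import urllib.request

    # --- Monolith: an ordinary in-process call --------------------------------
    def calculate_price_local(items: list[dict]) -> float:
        return sum(i["unit_price"] * i["qty"] for i in items)

    # --- Microservices: the same logic now lives behind an HTTP API -----------
    def calculate_price_remote(items: list[dict]) -> float:
        req = urllib.request.Request(
            "http://pricing-service.internal/v1/price",   # hypothetical endpoint
            data=json.dumps({"items": items}).encode(),
            headers={"Content-Type": "application/json"},
        )
        # Network failure, latency, and timeouts are now part of the call.
        with urllib.request.urlopen(req, timeout=2) as resp:
            return json.load(resp)["total"]

The second version also inherits serialization, retries, and API versioning concerns that the first one never had, which is part of the complexity the author is describing.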


DevOps at the Crossroads: The Future of Software Delivery

Even though DevOps culture is becoming mainstream, organizations are struggling with the increasing tool sprawl, complexity and costs. These teams are also dealing with a staggering (and growing) number of tools to help them get their work done. This has caused toil, with no single workflow and lack of visibility. At Clear Ventures, the problems hit close to home as 17 of the 21 companies we had funded had software development workflows that needed to be managed efficiently. We found that some of the companies simply did not have the expertise to build out a DevOps workflow themselves. On the other hand, other companies added expertise over time as they scaled up but that required them to completely redo their workflows resulting in a lot of wasted code and effort. We also noticed that the engineering managers struggled with software quality and did not know how to measure productivity in the new remote/hybrid working environment. In addition, developers were getting frustrated with the lack of ability to customize without a significant burden on themselves. 
A stateful architecture was invented to solve these problems, where the database and cache are started in the same process as the application. There are several databases in the Java world that can run in embedded mode. One of them is Apache Ignite. Apache Ignite supports a full in-memory mode (providing high-performance computing) as well as native persistence. This architecture requires an intelligent load balancer. It needs to know about the partition distribution in order to redirect a request to the node where the requested data is actually located. If the request is redirected to the wrong node, the data will come over the network from other nodes. Apache Ignite supports data collocation, which guarantees that information from different tables is stored on the same node if they share the same affinity key. The affinity key is set at table creation. For example, the Users table (a cache, in Ignite terms) has the primary key userId, and the Orders table may have userId as its affinity key.
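
The sketch below illustrates, in deliberately simplified form that is not Ignite's actual affinity function or partition map, why collocation plus a partition-aware load balancer avoids extra network hops: hashing the affinity key (userId) always resolves a user's row and that user's orders to the same node.

    # Simplified partition-aware routing. Real Ignite clusters maintain a
    # partition map and rebalance partitions as nodes join and leave.
    NUM_PARTITIONS = 1024
    NODES = ["node-a", "node-b", "node-c"]

    def partition_for(affinity_key: int) -> int:
        return hash(affinity_key) % NUM_PARTITIONS

    def node_for(partition: int) -> str:
        # Trivial partition-to-node assignment for illustration.
        return NODES[partition % len(NODES)]

    def route(user_id: int) -> str:
        """Rows from Users and Orders keyed by the same userId land on,
        and are served from, the same node."""
        return node_for(partition_for(user_id))

    print(route(42))  # every request keyed by userId 42 goes to the same node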


Here’s Why You Should Consider Becoming a Data Analyst

Data analysts specialize in gathering raw data and being able to derive insights from it. They have the patience and curiosity to poke around large amounts of data until they find meaningful information from it — after which they clean and present their findings to stakeholders. Data analysts use many different tools to come up with answers. They use SQL, Python, and sometimes even Excel to quickly solve problems. The end goal of an analyst is to solve a business problem with the help of data. This means that they either need to have necessary domain knowledge, or work closely with someone who already has the required industry expertise. Data analysts are curious people by nature. If they see a sudden change in data trends (like a small spike in sales at the end of the month), they would go out of their way to identify if the same patterns can be observed throughout the year. They then try to piece this together with industry knowledge and marketing efforts, and provide the company with advice on how to cater to their audience.


Siloscape: The Dark Side of Kubernetes

Good IT behavior starts with the user. As someone who has witnessed the impacts of ransomware firsthand, I can attest to the importance of having good password hygiene. I recommend using unique, differentiated passwords for each user account, ensuring correct password (and data) encryption when static or in transit and keeping vulnerable and valuable data out of plaintext whenever possible. In the case of Kubernetes, you must ensure that you understand how to secure it from top to bottom. Kubernetes offers some of the most well-written and understandable documentation out there and includes an entire section on how to configure, manage and secure your cluster properly. Kubernetes can be an awesome way to level-up applications and services. Still, the importance of proper configuration of each Kubernetes cluster cannot be overstated. In addition to good hygiene, having a trusted data management platform in place is essential for making protection and recovery from ransomware like Siloscape less burdensome.


An Introduction to Hybrid Microservices

Put simply, a hybrid microservices architecture comprises a mix of the two different architectural approaches. It comprises some components that adhere to the microservices architectural style and some other components that follow the monolithic architectural style. A hybrid microservices architecture is usually comprised of a collection of scalable, platform-agnostic components. It should take advantage of open-source tools, technologies, and resources and adopt a business-first approach with several reusable components. Hybrid microservices architectures are well-suited for cloud-native, containerized applications. A hybrid microservices-based application is a conglomeration of monolithic and microservices architectures – one in which some parts of the application are built as microservices and the remaining parts remain as a monolith. ... When adopting a microservices architecture in your application, the usual approach is to refactor the application and then implement the microservices architecture within it.


The Inevitability of Multi-Cloud-Native Apps

Consistently delivering rapid software iteration across a global footprint forces DevOps organizations to grapple with an entirely new set of technical challenges: Leveraging containerized applications and microservices architectures in production across multiple Kubernetes clusters running in multiple geographies. Customers want an on-demand experience. This third phase is what we call multi-cloud-native, and it was pioneered by hyperscale IaaS players like Google, AWS, Azure and Tencent. The reality is, of course, that hyperscalers aren’t the only ones who have figured out how to deliver multi-cloud-native apps. Webscale innovators like Doordash, Uber, Twitter and Netflix have done it, too. To get there, they had to make and share their multi-cloud-native apps across every geography where their customers live. And, in turn, to make that happen they had to tackle a new set of challenges: Develop new tools and techniques like geographically distributed, planet-scale databases and analytics engines, application architectures that run apps on the backend close to the consumer in a multi-cloud-native way. 


DeepMind is developing one algorithm to rule them all

The key thesis is that algorithms possess fundamentally different qualities when compared to deep learning methods — something Blundell and Veličković elaborated upon in their introduction of NAR. This suggests that if deep learning methods were better able to mimic algorithms, then generalization of the sort seen with algorithms would become possible with deep learning. Like all well-grounded research, NAR has a pedigree that goes back to the roots of the fields it touches upon, and branches out to collaborations with other researchers. Unlike much pie-in-the-sky research, NAR has some early results and applications to show. We recently sat down to discuss the first principles and foundations of NAR with Veličković and Blundell, to be joined as well by MILA researcher Andreea Deac, who expanded on specifics, applications, and future directions. Areas of interest include the processing of graph-shaped data and pathfinding.


Microservices Transformed DevOps — Why Security Is Next

Microservices break that same application into tens or hundreds of small individual pieces of software that address discrete functions and work together via separate APIs. A microservices-based approach enables teams to update those individual pieces of software separately, without having to touch each part of the application. Development teams can move much more quickly and software updates can happen much more frequently because releases are smaller. This shift in the way applications are built and updated has created a second movement/change: how software teams function and work. In this modern environment, software teams are responsible for smaller pieces of code that address a function within the app. For example, let’s say a pizza company has one team (Team 1) solely focused on the software around ordering and another (Team 2) on the tracking feature of a customer’s delivery. If there is an update to the ordering function, it shouldn’t affect the work that Team 2 is doing. A microservices-based architecture is not only changing how software is created


Transitioning from Monolith to Microservices

While there are many goals for a microservice architecture, the key wins are flexibility, delivery speed, and resiliency. After establishing your baseline for the delta between code commit and production deployment completion, measure the same process for a microservice. Similarly, establish a baseline for “business uptime” and compare it to that of your post-microservice implementation. “Business uptime” is the uptime required by necessary components in your architecture as it relates to your primary business goals. With a monolith, you deploy all of your components together, so a fault in one component could affect your entire monolithic application. As you transition to microservices, the pieces that remain in the monolith should be minimally affected, if at all, by the microservice components that you’re creating. ... Suppose you’ve abstracted your book ratings into a microservice. In that case, your business can still function—and would be minimally impacted if the book ratings service goes down—since what your customers primarily want to do is buy books.
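
A brief Python sketch of that book-ratings scenario, assuming a hypothetical internal ratings endpoint: if the extracted ratings microservice is down or slow, the storefront degrades gracefully instead of failing the purchase path.

    # Graceful degradation sketch: the store keeps selling books even when the
    # extracted ratings service is unreachable. Names and URL are hypothetical.
    import json
    import urllib.request
    from urllib.error import URLError

    RATINGS_URL = "http://ratings.internal:8080/ratings"  # hypothetical endpoint

    def get_book_page(book_id: str) -> dict:
        page = {"id": book_id, "title": "Example Title", "price": 12.99}
        try:
            with urllib.request.urlopen(f"{RATINGS_URL}/{book_id}", timeout=0.5) as resp:
                page["rating"] = json.load(resp)["average"]
        except (URLError, TimeoutError):
            # Ratings are down: degrade the page rather than fail the purchase path.
            page["rating"] = None
        return page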



Quote for the day:

"The essence of leadership is not giving things or even providing visions. It is offering oneself and one's spirit." -- Lee Bolman & Terence Deal

Daily Tech Digest - October 12, 2021

Proving the value of analytics on the edge

Las Vegas began deploying edge computing technology in 2018 while working on smart traffic solutions. A key driver for analyzing data at the network edge came from working with autonomous vehicle companies that needed near real-time data, Sherwood says. “Edge computing allowed for data to be analyzed and provided to the recipient in a manner which provided the best in speed,” Sherwood says. Visualizing data in a real-time format “allows for decision-makers to make more informed decisions.” The addition of predictive analytics and artificial intelligence (AI) is helping with decisions that are improving traffic flows, “and in the near future will have dramatic impacts on reducing traffic congestion and improving transit times and outcomes,” Sherwood says. To help bolster its data analytics operations overall and at the edge, the city government is developing a data analytics group as an offshoot of the IT department. The Office of Data and Analytics will drive how data is governed and used within the organization, Sherwood says. “We see lots of opportunities with many new technologies coming onto the market,” he says.


The Fundamentals of Testing with Persistence Layers

In order to learn how to test with databases, one must first ‘unlearn’ a few things, starting with the concepts of unit tests and integration tests. To put it bluntly, the modern definitions of these terms are so far removed from their original meanings that they are no longer useful for conversation. So, for the remainder of this article, we aren’t going to use either of them. The fundamental goal of testing is to produce information. A test should tell you something about the thing being tested that you may not have known before. The more information you get, the better. So, we are going to ignore anyone who says, “A test should only have one assertion” and replace it with, “A test should have as many assertions as needed to prove a fact”. The next problematic expression we need to deal with is, “All tests should be isolated”. This is often misunderstood to mean each test should be full of mocks so the function you’re testing is segregated from its dependencies. This is nonsense, as that function won’t be segregated from its dependencies in production.
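
In that spirit, here is a sketch of a database-backed test that uses as many assertions as it needs and does not mock the persistence layer; SQLite stands in for whatever database the application actually uses, and the table and helper function are invented for illustration.

    # A persistence-layer test with no mocks and as many assertions as needed
    # to prove the fact under test.
    import sqlite3

    def save_customer(conn, name, email):
        cur = conn.execute("INSERT INTO customers (name, email) VALUES (?, ?)", (name, email))
        conn.commit()
        return cur.lastrowid

    def test_save_customer_round_trip():
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL, email TEXT NOT NULL)")

        new_id = save_customer(conn, "Ada", "ada@example.com")
        row = conn.execute("SELECT name, email FROM customers WHERE id = ?", (new_id,)).fetchone()

        # Several assertions together prove the round trip actually worked.
        assert new_id is not None
        assert row is not None
        assert row == ("Ada", "ada@example.com")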


Should We Resign Ourselves To The Great Resignation?

Is the Great Resignation a temporary trend or a long-term structural change? There’s no way to know, but my money is on the latter. Life-changing events change lives, whether or not we realize it as they occur. An individual crisis changes individual behavior; worldwide crises cause lasting social and cultural consequences. The pandemic completely upended the employee experience, and while many employers continued to monitor productivity, most didn’t devote nearly the same effort to soliciting real-time, real-world feedback from remote workers about the challenges, struggles and stresses they were facing. McKinsey found that “employees prioritize relational factors, whereas employers focus on transactional ones”. By neglecting to engage with remote employees, and by not listening to or addressing their issues and concerns, employers missed a once-in-a-lifetime opportunity to build trust within the organization and loyalty from workers. As the Great Resignation plays out and the workforce reshuffles, it will be interesting to see if employers and workers can engage, listen, and trust each other enough to find common ground.


How cyberattacks are changing according to new Microsoft Digital Defense Report

Ransomware offers a low-investment, high-profit business model that’s irresistible to criminals. What began with single-PC attacks now includes crippling network-wide attacks using multiple extortion methods to target both your data and reputation, all enabled by human intelligence. Through this combination of real-time intelligence and broader criminal tactics, ransomware operators have driven their profits to unprecedented levels. This human-operated ransomware, also known as “big game ransomware,” involves criminals hunting for large targets that will provide a substantial payday through syndicates and affiliates. Ransomware is becoming a modular system like any other big business, including ransomware as a service (RaaS). With RaaS there isn’t a single individual behind a ransomware attack; rather, there are multiple groups. For example, one threat actor may develop and deploy malware that gives one attacker access to a certain category of victims, whereas a different actor may merely deploy malware.


Cybersecurity awareness month: Fight the phish!

Simply put, the phishing “game” only has two moves: the scammers always play first, trying to trick you, and you always get to play second, after they’ve sent out their fake message. There’s little or no time limit for your move; you can ask for as much help as you like; you’ve probably got years of experience playing this game already; the crooks often make really silly mistakes that are easy to spot… and if you aren’t sure, you can simply ignore the message that the crooks just sent, which means you win anyway! How hard can it be to beat the criminals every time? Of course, as with many things in life, the moment you take it for granted that you will win every time is often the very same moment that you stop being careful, and that’s when accidents happen. Don’t forget that phishing scammers get to try over and over again. They can use email attachments one day, dodgy web links the next, rogue SMSes the day after that, and if none of those work, they can send you fraudulent messages on a social network. The crooks can try threatening you with closing your account, warning you of an invoice you need to pay, flattering you with false praise, offering you a new job, or announcing that you’ve won a fake prize.


Edge computing: The architecture of the future

As technology extends deeper into every aspect of business, the tip of the spear is often some device at the outer edge of the network, whether a connected industrial controller, a soil moisture sensor, a smartphone, or a security cam. This ballooning internet of things is already collecting petabytes of data, some of it processed for analysis and some of it immediately actionable. So an architectural problem arises: You don’t want to connect all those devices and stream all that data directly to some centralized cloud or company data center. The latency and data transfer costs are too high. That’s where edge computing comes in. It provides the “intermediating infrastructure and critical services between core datacenters and intelligent endpoints,” as the research firm IDC puts it. In other words, edge computing provides a vital layer of compute and storage physically close to IoT endpoints, so that control devices can respond with low latency – and edge analytics processing can reduce the amount of data that needs to be transferred to the core.
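
A simple Python sketch of that division of labour, assuming a hypothetical central ingest endpoint: the edge node reacts locally with low latency and forwards only a compact summary to the core, rather than streaming every raw reading.

    # Edge-side aggregation sketch: act locally, then forward only a summary.
    # The ingest URL and threshold are hypothetical.
    import json
    import statistics
    import urllib.request

    CORE_INGEST_URL = "http://core.example.com/ingest"  # hypothetical central endpoint

    def trigger_local_alarm() -> None:
        print("threshold exceeded - acting at the edge, no round trip to the cloud")

    def process_readings(readings: list[float]) -> None:
        # Low-latency local decision, e.g. reacting to an out-of-range sensor value.
        if any(r > 90.0 for r in readings):
            trigger_local_alarm()

        # Ship an aggregate instead of every raw reading to cut transfer volume.
        summary = {"count": len(readings), "mean": statistics.mean(readings), "max": max(readings)}
        request = urllib.request.Request(
            CORE_INGEST_URL,
            data=json.dumps(summary).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)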


Test Automation for Software Development

Automating software and security testing in software development is an ongoing process, yet truly reaching full automation may never happen. In SmartBear Software’s “2021 State of Software Quality | Testing” report, the percentage of organizations that conduct all tests manually rose from 5% in 2019 to 11% in 2021. This does not mean that automation is not happening; on the contrary, both manual and automated tests are being conducted. The biggest challenge to test automation is no longer dealing with changing functionality but instead not having enough time to create and conduct tests. Testers are not being challenged by demands to deploy more frequently but instead to test more frequently across more environments. Testing of the user interface layer is more common, and to address this, 50% of organizations now conduct some automated usability testing, compared to just 34% in 2019. The remainder of the article provides additional highlights on this report and on two other reports that cover DevSecOps metrics and practices.


API Design Principles and Process at Slack

Slack’s list of design principles begins with having each API do one thing well and with the developer experience. The first is that APIs should focus on a specific use case, which makes them more straightforward, safer, and easier to scale. The authors believe that APIs should be so well designed and documented that developers can build a simple use case in a matter of minutes and discover parts of the API intuitively. In case of errors, the API should return all the information necessary for developers to understand the cause of the error and take the first steps towards solving it. The fifth principle concerns scale and performance. The authors provide concrete advice, recommending pagination of big collections, avoiding nesting big collections inside other big collections, and implementing rate limiting on the API. The last principle enumerated by the authors is that breaking changes should be avoided.
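
As a loose illustration of two of those recommendations (pagination of big collections and errors that explain how to recover), here is a short Python sketch; the field names are illustrative and not Slack’s actual wire format.

    # Illustrative response shapes only -- not Slack's real API payloads.
    def list_channels(all_channels: list, cursor: int = 0, limit: int = 100) -> dict:
        # Paginate the big collection instead of returning it all at once.
        page = all_channels[cursor:cursor + limit]
        next_cursor = cursor + limit if cursor + limit < len(all_channels) else None
        return {"ok": True, "channels": page, "next_cursor": next_cursor}

    def rate_limited_error(retry_after_seconds: int) -> dict:
        # Give the caller everything needed to understand the error and recover.
        return {
            "ok": False,
            "error": "rate_limited",
            "detail": "Too many requests; slow down and retry.",
            "retry_after": retry_after_seconds,
        }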


How to Build a Strong and Effective Data Retention Policy

The first step toward creating a comprehensive DRP strategy is to identify the specific business needs the retention policy must address. The next step should be reviewing the compliance regulations that are applicable to the entire organization. “Designate a team of individuals across various business practices to begin data inventorying and devising a plan to implement and maintain a data retention policy that meets your business requirements while adhering to compliance regulations,” Gandhi advises. The enterprise's chief data officer (CDO) should oversee the DRP's design and implementation, Ferreira recommends. “However, everyone who deals with the data must be aware of the mechanisms implemented ... so that they can behave in ways that facilitate the implementation of the DRP,” he adds. “Implementing a robust DRP may be a top-down decision, but it requires buy-in from all levels of the organization.” Stakeholders from records, legal, IT, security, privacy, and other relevant posts and departments all need a chance to weigh in on an enterprise's data retention policy, Read says.


FSU’s university-wide resiliency program focuses on doing the basics better

In addition to its far-reaching geographical footprint, FSU has a broad range of operational needs to support the diversity of work typical of a university. It also has distributed IT. All those factors make for additional levels of complexity within disaster recovery and business continuity plans. Furthermore, at the time of the audit, the university had 307 different units expected to devise their own disaster and recovery plans as well as complete an annual 140-question risk assessment. Hunkapiller sought to overcome those complexities by using a multipronged approach to first tackle the inadequacies in the university’s business continuity, disaster preparedness and response capabilities and then encourage continuous improvement. “The idea was to better identify risks, improve our vulnerability management and resiliency plans, ensure continuity of operations and bring risk down to a level that was tolerable,” says Hunkapiller, who worked with FSU’s Department of Emergency Management to devise Seminole Secure.



Quote for the day:

"So much of what we call management consists in making it difficult for people to work." -- Peter Drucker