The Value of Design

About half of the projects I’ve worked on skipped any sort of design phase. This typically led to unmanageable code a few months into the project, with no discernible way to backtrack or quickly change architecture. While each project could benefit from its own postmortem on its design phase (or lack thereof), I’m going to focus on the commonalities between projects that had some form of design phase and projects that did not. Feel free to use this as a guide for your future projects.

The Good

A design phase can be beneficial for any project. It provides a moment to think about the solution and how its architecture can be developed modularly, with reusability and maintainability in mind. Do not underestimate this phase! This is where deep thought goes into how the system should be developed and how it is intended to be used. APIs are designed during this phase, and they determine how the system interacts with itself and with other services. A good design at this stage results in cheaper development and cheaper maintenance.

While design is important, there is a diminishing return on investment in it. The more time spent designing a system without implementing anything, the less valuable the design becomes. Development teams should be cognizant of the time spent designing and, after a high-level design, begin designing the first thing to implement. Designing iteratively alongside development results in a flexible work plan and a flexible architecture or API design. The idea with a lightweight, iterative design process is that future design improvements build on top of or extend the existing design. Any future work that requires a rewrite highlights either a lack of understanding of the original requirements or an inflexible design. A good barometer of when a design has “enough” value is when the software engineers understand the system they are about to begin developing.

In addition to understanding a system before implementing it, design provides a blueprint for test-driven development. Tests can be written against the expected behavior of a designed API before any of its logic is actually implemented. This type of testing leads to clean, clear requirements and a shared understanding of how the system should operate once implemented.
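
For example, suppose the design calls for a small inventory API with add_item and total_count operations (hypothetical names, purely for illustration). A test can be written against that contract before a single line of the implementation exists; a minimal sketch in Python:

    import unittest

    class InventoryService:
        """The designed API: the behavior is specified, the implementation comes later."""

        def add_item(self, sku: str, quantity: int) -> None:
            raise NotImplementedError

        def total_count(self) -> int:
            raise NotImplementedError

    class TestInventoryService(unittest.TestCase):
        def test_added_items_are_counted(self):
            # Written against the designed behavior before any logic exists;
            # it fails today and passes once the implementation lands.
            service = InventoryService()
            service.add_item("widget", 3)
            service.add_item("gadget", 2)
            self.assertEqual(service.total_count(), 5)

    if __name__ == "__main__":
        unittest.main()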

The Bad

So, why don’t teams design? Cutting or severely reducing design time looks like a cost-saving measure (and we all know time equals money), and the cost of cutting it isn’t immediately tangible to project management. Why spend time thinking about the project when you can just jump right in and start making it? Project managers typically care about actions that move the needle forward. Design does not move the needle forward: it makes the needle move faster.

When design is cut from the process, a lot of time is spent re-implementing, reworking, or refactoring code. Developers often code themselves into a proverbial corner and find the system they have built is not easily adaptable to a new feature that needs implementation. This new feature requires refactoring the existing code. That refactoring doesn’t get a design phase either, and the implemented feature is later refactored again when some other new feature needs implementation. This is a vicious cycle that often becomes the status quo, and the team’s productivity quickly plummets (not to mention morale).

Furthermore, with all of this rewriting and refactoring, neither the system nor the team’s understanding of it can be relied upon. The system becomes a hodgepodge of hacks and quick fixes, effectively held together by “magic.” I’ve been on a project where this “magic” ended up blocking new features, and we ended up tracing the logical flows. This led us to discover a nasty bug in a logical flow that wasn’t expected and would not otherwise have been discovered. What was worse was that we couldn’t fix this bug without declaring technical bankruptcy and reworking the architecture to achieve the intended (and expected) results.

The Ugly

Let’s talk about the ugly truth of any code base, regardless of good, bad, or no design: technical debt. Ward Cunningham coined the term as an analogy: deficiencies in a software product are like a loan that accrues interest. The longer a deficiency persists in a system, the more technical debt it accrues. A project can accumulate so much technical debt that forward progress is no longer possible and the project must declare technical bankruptcy. This bankruptcy results in either a failed project or a rewrite of part or all of the system. Martin Fowler has a lovely article that describes this concept in wonderful detail.

Following the same analogy, consider design a down payment on a project. Sure, you can start working on a project without a design, but you will begin accruing technical debt immediately, and it will accrue faster. Do yourself a favor and work in a design whenever you need to refactor something, and before you start working on the code! Coding without a plan is what typically gets a software team stuck in technical bankruptcy. Don’t repeat what got you there while you dig yourself out!

Wrap Up

Knowing all of this, it’s easy to see the value in design. Building a system without a plan has large hidden costs. Refactoring without a plan compounds those costs. When you don’t have a design phase at all, you are essentially earmarking money to burn down the road, spending more and more time re-developing parts of your system until you can’t move that needle at all.

Hiring Good Developers

Hiring software engineers is easy; hiring good software engineers is hard. Due to the nature of software engineering, there is no clear or objective way to measure the skills of an engineer. Companies try to determine skill based on questions and online skills assessments. The problem with this approach is that it doesn’t highlight the qualities of a good software engineer.

Online Skills Tests

Online skills assessments typically provide a question to a software engineer and a time limit to answer this question. Sites like HackerRank provide a great platform for these types of tests. The engineer is generally not allowed to look up information (on the honor system) and the question doesn’t typically relate to the work the developer will be doing if hired. This type of skills assessment is reminiscent of college exams and doesn’t typically allow the creative freedoms normal working environments grant.

Homework

I’ve been part of take-home assignments and have developed hiring processes that include a take-home assignment with an accompanying code review. The intention is to let the developer showcase their engineering talents, then their soft skills (like presenting to a group), their ability to accept feedback, and their ability to explain their work. While this isn’t fool-proof and doesn’t make the hiring process objective, it does showcase talent. The code review is also meant to prevent an unskilled engineer from copying another solution. Unfortunately, this too has a drawback: the developers who spend the most time on a take-home project produce a better quality product.

Solution?

Limiting time on homework projects is more of a suggestion, since it is not enforced, so the format still favors those who can spend more time on the project. Spending that time isn’t possible for someone with a full-time job and a family to tend to. Skills assessment tests favor those who regularly practice them and aren’t necessarily a good measure of a candidate’s skills in a working environment. Perhaps a different solution is necessary. Perhaps a timed, “open book” live coding exercise is better. All input could be tracked through the web console to get an idea of the developer’s thought process behind their work, and it could be played back (maybe at 10x speed) to watch it all unfold. This format would relieve the candidate of time-related stress while also allowing their creative process to shine. Sure, there are trade-offs to this approach, but isn’t solving problems why we all got into this business? This one seems to be the hardest to solve.

Building a Product Vision

Developing software under deadlines is hard. When I start projects I often have a problem to solve in mind and that’s it. It takes some effort and genuine thinking to come up with a solution to that problem. Part of that solution is having a vision. Without that, how do you know which direction to go? Developing a solution without a vision is like attempting to navigate a cave without light. If you haven’t been caving (or spelunking) before, it’s pitch black in there without a light. So much so that you literally cannot see your hand in front of your face and you don’t know which direction you are facing. If you don’t know which direction you are facing when developing a solution, how do you know where you are going?

Vision

On each project where I take a leadership role, I seek as much information as possible from the client. Defining your project’s vision is 90% asking the right questions and 10% thinking about the solution. If you’re asking the right questions, the solution will appear as if it’s emerging from some magical mist like a unicorn in the early morning sunrise. The best resource I’ve found for defining your vision is Roman Pichler’s Product Vision Board.

[Image: Roman Pichler’s Product Vision Board — your best resource for defining your vision.]

So what questions should you ask? That’s a great question! While it largely depends on what field your project is in and your client’s preferred method of communication, there are a few questions almost all project leads should ask, and they are all right on the Product Vision Board:

  1. What is your purpose for creating the product?
  2. Which positive change should it bring about?
  3. Which market or market segment does the product address?
  4. Who are the target customers and users?
  5. What problem does the product solve?
  6. Which benefit does it provide?
  7. What product is it?
  8. What makes it stand out?
  9. Is it feasible to develop the product?
  10. How is the product going to benefit the company?
  11. What are the business goals?

Once you can answer these questions, some sort of vision of your solution should come to mind. You should start recognizing that unicorn. You’re also in a great place: you’ve validated your product and have a clear path forward. In addition, you can answer a few more questions about your vision to get clear insight into the current market and what it would take to make your product profitable. This part is optional, but definitely recommended if you plan to sell your product. These questions are included in the extended Product Vision Board.

  1. Who are your main competitors?
  2. What are their strengths and weaknesses?
  3. How can you monetize your product and generate revenues?
  4. What are the main cost factors to develop, market, sell, and service the product?
  5. How will you market and sell your product?
  6. Do the channels exist today?

I highly recommend answering these questions if you plan on marketing and subsequently selling your product, as doing so will position you favorably when it comes time to sell.

Hiring a Software Consultant?

Hiring a consultant for your business can be a little uncomfortable. You have a contract that protects your business, but what if the consultant is just… bad? There are a few tips and tricks for identifying a less than stellar software consultant and this article will cover those.

Low Balling

Whether your project requires temporary help or a longer-term engagement, beware of consultants who bid low. There’s an axiom that states “you get what you pay for.” Some software consultants purposefully provide low estimates. These low estimates may look like a bargain, but those consultants make up for the low bid through scope-change fees and additional customization fees, which quickly add up to well more than the original estimate. Be sure to get multiple quotes from software consultants before hiring, and compare their experience with their rate. While the cheapest consultant may look like a great choice, the more expensive consultant will save you surprise billing and additional headaches in the end. A realistic cost estimate may seem expensive, but it is a more accurate representation of what you will end up paying. You can also protect yourself from purposefully low bids by building some wiggle room for scope and requirement changes into your contracts. You should also get a feel for how flexible your consultants are before hiring them.

Bait and Switch

Some consulting firms lure clients in by showing off their star performers. This helps justify a higher price and makes the firm more attractive. However, these firms increase their profit margins by bringing in junior developers to do the actual work. Sure, the seasoned engineer might play some part in the development process, but the time that engineer spends on your project is severely limited. You should meet the entire team that you are hiring; at the end of the day, it is your product you are paying for. You should interview the team to get an understanding of their competence in defining requirements and developing solutions. Your contract should explicitly list the developers who will be working on your product. A hefty penalty combined with that explicit list of team members should dissuade the more devious firms from attempting this ploy. Teamwork makes the dream work!

Communication Breakdown

Good communication is necessary to complete a project. Great communication is necessary to complete a great project. When you hire a consultant, be mindful of a lack of communication. No news is not good news with your consultant. You should be driving conversations and making decisions. An unseasoned consultant, especially one paid hourly, may have no incentive to come to a decision or to end a long, drawn-out meeting. Your decisions should come with a clear delivery deadline or a schedule with defined, time-bound milestones. Without a time-based factor driving the schedule, an unqualified consultant has room to draw out the project, bleeding your company of cash.

What’s Yours is Yours

When you hire a consultant, you are opening up a vulnerability in your character, your trust, and your company. The consultant working for you is creating something that should make you more money than your investment in hiring them to complete the project. Your contracts should protect that vulnerability. The intellectual property rights for anything a consultant creates should remain with your company, and you should lay out any tools or processes that help protect that property. Some consultants may try to hold your product, equipment, servers, and accounts hostage. This can become especially problematic when you try to replace the consultant or even add more consultants to the team. Be sure your contracts state that the company retains the intellectual property rights for anything developed and for any domains you register. Check with your contract author to determine additional safeguards against having your company held hostage by a bad consultant. Demand copies of any documentation, licenses, and credentials from a consultant as part of the contract.

Project Vampires

Nothing is worse than hiring a consultant who looks great on paper and has talked the talk, only to find out they can’t walk the walk. This is probably the most common problem I hear from companies that have hired bad consultants. A project vampire is typically someone who is unfamiliar with a technology they are supposed to be an expert in, or someone who cannot make a decision and stalls the project while they “figure it out.” Both scenarios are bad news, as every minute that ticks by “figuring it out” leads to higher billing. On the flip side, the company itself can be the vampire by not making direct decisions and communicating them to the consultants (communication is key!), and by failing to keep those consultants accountable with deadlines and milestones. Decision by committee is rarely productive, and as the project drags on, the bill will increase while the consultant waits to hear back or stays busy “figuring it out.”

Teamwork Makes the Dream Work

This is my axiom. As a consultant to your business, I act as a team member for your product. I communicate early and often about anything I don’t understand or am not familiar with, and about any concerns I have over the technical direction of your project or existing infrastructure. I respect your business and the decisions that go into operating it. After all, it is your business, and I’m helping you achieve your goals. Communication is key to a successful project and I communicate… often. I also share ideas on technical direction and can step back if there is a technical direction already in place. I can work with existing team members (including other consultants) to hit your goals and deliver a quality product. My time is as valuable as yours, and I don’t want to waste my time or your dime on endless meetings or over-analyzing solutions. I prefer to help your business succeed. I advocate for your business when necessary to ensure you retain what is yours and you don’t overpay for shoddy work or vampires.

Teamwork makes the dream work. If you are looking for a consultant for your project contact me below:

Eisenhower Matrix

I’ve had my share of projects in the past. With each project comes a bit of unknown terrain, deadlines, known tasks, known risks, unknown tasks, unknown risks… the list goes on. It’s hard to prioritize everything as it comes in, and backlogging that prioritization becomes a huge burden. I’m sure your project backlog (or your at-home to-do list) is massive, and any attempt to start tackling it can feel overwhelming. Fortunately, I’ve found a rather interesting tool to help: the Eisenhower Matrix!

The Eisenhower Matrix is named after Dwight D. Eisenhower, the 34th president of the United States. During his presidency he launched DARPA (whose ARPANET project later became the precursor to the internet) and NASA. He was the Supreme Commander of the Allied Forces in Europe during World War II, and the first Supreme Commander of NATO. This guy was busy! He also had to make a lot of decisions quickly, and this matrix is the prioritization tool attributed to him.

The concept is simple: for a given task, determine whether it is urgent or not urgent, then determine whether it is important or not important. Once you figure those out, place the task in the appropriate box. Wherever it lies, you either Do it, Plan it, Delegate it, or Eliminate it.

Consider something on your household to-do list: grocery shopping. If you are out of food, it’s urgent, and it’s pretty important unless you have some other means of feeding yourself, perhaps a garden. If you already have food, it may not be as urgent, but it is still important. If you don’t need food right now, it may be neither urgent nor important. In each case, you determine whether you need to go now, can plan on going later, can delegate your shopping to something else (maybe Amazon Pantry?), or, if you have a garden that can sustain you, can eliminate it from your to-do list entirely.
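
To make the sorting concrete, here is a minimal sketch in Python (the four actions are the standard quadrant labels; the function name and the example calls are just illustration):

    def eisenhower(urgent: bool, important: bool) -> str:
        """Map a task's urgency and importance to one of the four actions."""
        if urgent and important:
            return "Do"         # do it now
        if important:
            return "Plan"       # schedule a time for it
        if urgent:
            return "Delegate"   # hand it off to someone (or something) else
        return "Eliminate"      # drop it from the list

    # The grocery example from above: out of food means urgent and important.
    print(eisenhower(urgent=True, important=True))    # Do
    # A garden that can sustain you: neither urgent nor important.
    print(eisenhower(urgent=False, important=False))  # Eliminate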

Once you process your entire backlog in this manner, you should (hopefully) have eliminated a bit of it. Maybe that deck you want to build can be delegated to a contractor. That room you want organized can be planned for a day when you’ll finally get to it. And that oil change that’s overdue? You’re at the shop today getting it done. This backlog grooming can be repeated each week (or however often you want to do it… be sure to put that task in the right box!), but the process should at least help you see the priority of the items in your backlog and help you groom it to a manageable state. Anything you want to add, make sure you put it in a box!

Strong Opinions, Weakly Held

I believe in three core values to any successful team and/or project: communication, collaboration, and transparency. Communication is a key aspect to successful teams because it keeps everyone involved. Communication drives ideas. Ideas are the seeds of change and communication gets them planted. Collaboration brings the seeds of ideas to growth. Teams that are not collaborative suffer from infighting and become unproductive and resentful of a project. Team members want to be part of a solution and collaboration is the vehicle everyone must ride in to reach success. Transparency is the last leg of a successful team. Transparency requires both communication and collaboration. Transparency requires each individual team member to know the difference between what they do know, and what they need to learn. Team members who are transparent in their skills ask a lot of questions. The answers to these questions are often helpful for other team members as well. Transparency is also about owning mistakes, addressing them, and learning from them. Every failure comes with an opportunity to learn. One never really fails if they seize the learning opportunities afforded by failure and grow from them. These three core values I hold are what I instill in my teams.

Recently, during an interview, I brought up these core values and followed up with a quote I feel expresses not only these three values, but my thoughts on being a team member: “Strong opinions, weakly held.” This can also be rephrased as “Strong opinions, loosely held” and they both mean the same thing. I bring strong opinions to a team backed by experience and learning through many failures. Learning from these failures strengthens particular opinions, but they still remain loosely held. These opinions are meant as a starting point for collaboration or as a learning opportunity for myself and any others who may not have experienced what brought about these opinions. These opinions are meant to inspire creative thought and collaboration, not as a rule of thumb or “set in stone” requirement. These opinions are loosely held.

The flexibility of a team is important to adapt to changing requirements, processes, deadlines, and outside obstacles. Rigidity is a project slayer. I may have strong opinions on a topic (say, using a REST API vs an unstructured one) but these opinions are meant as a conversation starter to discuss a solution to a relevant problem. This conversation solicits input from the members of the team. It provides a platform for other opinions and a better solution. Sure, that solution may be an unstructured API, and that’s okay. But, the point of bringing up strong opinions is to start that conversation, not lay down the law. If there wasn’t at least a conversation about API design (or any other implementation) in the first place, the team could move forward in a rather meandering manner. The project could take an intangible hit to be discovered later as it accumulates technical debt. Communication about a project direction reduces this debt and lets a project be more flexible during a time where flexibility comes easy.

In the interview, I failed to accurately describe “strong opinions, weakly held.” This article is me learning from that failure and really taking the time to think about that phrase and how it can be perceived by others. When I came across the phrase it resonated with me as it so succinctly underlined my core values of communication, collaboration, and transparency. To me, it’s a positive attribute to have. Using that particular phrase became a strong opinion of mine. Maybe in the future I won’t use this phrase without following up with exactly how it aligns with my core values and what I look for in a team. The only thing I know is that I don’t know everything and I am definitely open to learn. I have strong opinions for sure, but they are loosely held.

Version Control

I’ve been working as a software engineer for over a decade. In that time I’ve worked on projects that had version control in place and projects that had none. While I believe all projects should use version control, I have come across some projects that don’t see the benefit or value. This article aims to highlight the benefits and value of using version control and the pitfalls of going without it.

What Is It?

First, what is version control? It’s essentially a library for your code with a specialized database tracking every change to every file. This type of tracking provides insight into your code: what changed, the thinking behind each change in the form of commit comments, and overall visibility into how your code evolves over time. It also lets you check out your entire code base as it existed at any single change. This becomes helpful if a breaking change is introduced: you can always roll back to a previous version. It also helps your developers identify the exact change that introduced a bug.
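
As a toy illustration of that “specialized database” idea (this is not how any real version control system is implemented, just a sketch of the concept), you can picture an ordered list of revisions that you can check out at any point or search for the change that introduced a bug:

    class ToyRepository:
        """A naive, in-memory stand-in for a version control database."""

        def __init__(self):
            self.revisions = []  # each revision: (message, {filename: contents})

        def commit(self, message, files):
            self.revisions.append((message, dict(files)))

        def checkout(self, revision):
            """Return the whole code base as it existed at a single revision."""
            return self.revisions[revision][1]

        def first_bad_revision(self, is_broken):
            """Find the change that introduced a bug, given a test function."""
            for number, (message, files) in enumerate(self.revisions):
                if is_broken(files):
                    return number, message
            return None

    repo = ToyRepository()
    repo.commit("initial page", {"index.html": "<h1>Hello</h1>"})
    repo.commit("tweak heading", {"index.html": "<h1>Helo</h1>"})
    print(repo.first_bad_revision(lambda files: "Helo" in files["index.html"]))
    # (1, 'tweak heading')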

Why Use It?

As mentioned, it provides some innate capabilities like rolling back to a previous version and viewing changes to code. It also handles complicated code merges when two people change the same code. This kind of merging makes development easier and faster than it is for teams that do not use a version control system. Take an example I experienced during my early years as a software engineer:

The team was small (3 people) and the project was simple (a simple website). These were the days of FTP clients, when deploying your website was accomplished by drag-and-drop to your web server. Simple. Easy. Clean. Right?

Well, with 3 developers, we decided that the web server would be the stand-in for the most current version of the website (after all, it’s what everyone on the internet was looking at). Things immediately became more complicated. If a developer was working on something, they first needed to copy the files from the web server to their local machine, make their changes, then copy the files back to the web server. Hopefully no other changes were made in the meantime. When changes were made (and they most definitely were), the developer would have to copy the files from the web server (again) back to their local machine (into a different folder), talk to the developer (or developers) who made those changes to figure out what they changed, and manually merge the affected files (timestamps definitely helped). When that was all done, they would have to check the web server again for changes. Sometimes more changes were present, and the whole process of copying, talking, and merging started again. This cycle repeated itself until being ready to copy to the web server lined up with there being no new changes on the web server. This could take a few hours.

We were naive about version control systems at the time. Once we discovered one (Subversion, in this case), it made things infinitely easier. Developers would check out the main branch of the repository, make their changes, then check them in. Merging happened automatically most of the time, unless two developers were working on the same code. In that case, the source control manager would present the conflicting changes in an easily readable visual manner and allow the developer to pick and choose what the final file would be. After this merge, the developer could test the changes before committing everything back to the repository. If another developer committed changes before the commit was ready, the source control manager knew, and the developer would update their code from the repository first. Again, merges generally happened automatically at this point, but in the rare case a conflict arose, the visualizer would present it to the developer. This cycle rarely happened, because the whole process was fast, easy, and efficient. When a deployment was ready, a tag was made in the repository and that specific tag was checked out on the web server. No file copies were made anymore, no FTP clients were involved, and everyone knew exactly what was on the web server at any given time and whether any of the files on the web server had changed.

Wrap Up

I find version control systems a necessity for a successful software development team both in terms of efficiency and cost. Less time working on frivolous things equals less money spent! If a team insists on not using a source control manager, maybe that team hasn’t yet experienced anything negative impacting their development efficiency. I use source control for all of my projects regardless of team size. It’s beneficial for a team of 1 just for the ease of code tracking and visibility into bug introduction. If you’re not using source control, I strongly urge you to adopt it!

Server-Side vs Client-Side Trust

I like to pretend I’m an avid gamer. I try to keep up with the latest gaming trends, well… at least I try to. I have a few games I typically go to, and a few games I am excited to play once they release (and I wait until a sale, or until they’ve been out long enough that the price drops… I am an adult with responsibilities after all…). I’ve played some games that are great (like Diablo III) and some games that are great in concept but lacking in execution (like Tom Clancy’s The Division). My go-to games are generally networked and have other players playing them either in a cooperative or adversarial capacity. There are some games, however, that draw more hackers than others. Why is that? This article is an attempt to explain the exploitation practices of these so-called “hackers” and the drive behind their exploits.

First, before we talk about how “hacking” works, we should cover some basics of network-based game play. There are several methods of accomplishing this type of play, and I’d like to discuss the pros and cons of each.

Peer-to-Peer

Peer-to-peer, sometimes referred to as P2P, is exactly what it sounds like. One player is the physical host of a game and the other players connect to that host. In this type of networked gaming, the host has an advantage when it comes to latency (aka lag). The inputs from the host have effectively zero latency, while every other player’s latency depends on their connection to the host.

The main pro of this type of networking is its ease of use. It doesn’t require any specific setup or resources for a player to host a game, and it does not require the gaming company to establish and maintain any dedicated servers. Many console games use this type of setup, as the network demand is low and the games are typically casual.

Dedicated Server

Dedicated servers are hosted game servers built specifically to host a particular game. In games that use this type of connection, all players connect to the dedicated server, and each player’s latency is based on their individual connection to the hosting server. Typically, the dedicated server is geographically positioned close to an internet backbone. This type of server is generally used for more competitive play (like Overwatch or Counter-Strike) and can be set up in a local LAN environment for offline play.

Cloud Hosted

Cloud-hosted servers are relatively new to the gaming industry. These servers are typically allocated on demand for a particular game and are shut down after the game resolves. This reduces a company’s overall cost compared to always-on dedicated servers, since capacity expands and contracts with player demand. Games that utilize this type of server generally have a matchmaking system that finds and groups players, allocates a new server, then loads the players onto it. As the technology matures, this type of game server is likely to be adopted by more and more games.

Building Trust

With any game comes some level of trust between players. For video games, this trust can be enforced by the server hosting the multiplayer game or by the game client itself. Server-side trust is generally the most trustworthy: commands come in from each player’s game client, are validated, and then the game state is updated and sent to each player’s client. Client-side trust, however, is far less trustworthy: the server assumes the commands it receives from each client are true, and no validation is performed.

Server Side Trust

In Server Side Trust, when a command is received from any player’s client, it is validated against the rules of the game and the game state. If a player’s client sends a fake command, say “Player A shot Player B on the other side of the map and did a million damage,” it is logically checked against the game state (is Player A in range of Player B?) and the rules of the game (does the weapon Player A is using allow a million damage?). If a command violates the game state or the game rules, it is either ignored or flagged as suspicious. If enough suspicious commands are flagged for a player, that player can be banned from the game, as it indicates cheating.
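
A minimal sketch of that kind of check (the rule values, message shape, and coordinate math are invented for illustration; they aren’t taken from any real game):

    MAX_WEAPON_DAMAGE = 150      # illustrative rule values, not from any real game
    MAX_WEAPON_RANGE = 80.0

    def validate_shot(command, game_state):
        """Check a "Player A shot Player B" command against rules and state.

        Returns True if the command is plausible, False if it should be
        ignored or flagged as suspicious.
        """
        shooter = game_state[command["shooter"]]
        target = game_state[command["target"]]

        dx = shooter["x"] - target["x"]
        dy = shooter["y"] - target["y"]
        distance = (dx * dx + dy * dy) ** 0.5

        in_range = distance <= MAX_WEAPON_RANGE                    # Is A in range of B?
        plausible_damage = command["damage"] <= MAX_WEAPON_DAMAGE  # Can the weapon do that much?
        return in_range and plausible_damage

    state = {"A": {"x": 0.0, "y": 0.0}, "B": {"x": 900.0, "y": 900.0}}
    fake = {"shooter": "A", "target": "B", "damage": 1_000_000}
    print(validate_shot(fake, state))  # False: ignore it or flag it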

Client Side Trust

In Client Side Trust, when a server receives a command from a player’s client, it is regarded as the truth. If, somehow, a fake command is sent from a client (say, “Player A shot Player B on the other side of the map and did a million damage”), the server trusts it as a true and accurate command, updates the game state, then relays the state to each player’s client. The result for Player B is that they are suddenly killed by Player A. This is obviously a problem. Client side trust assumes the game client is secure, so extra steps are necessary to ensure the client is not modified and the messages sent to the server are the originals. In-transit communications can be protected through encryption, but that only helps as long as messages cannot be intercepted or altered before encryption or after decryption, which is very hard to guarantee on a machine the player controls.

Never Trust the Client

There’s a saying in software development: “Never trust the client.” This does not refer to a person, but rather to any consumer of server-side processes, be it a web application, game client, or anything else that processes transactions with a central (or distributed) server. Client side trust is inherently insecure. Server side validation is always required when communicating with a third party (in this case, a client application). This validation is crucial to ensuring the integrity of the system as a whole. When client side trust breaks down, which it most likely will in very creative ways, recovering is easier if the server is already validating incoming messages against the rules of the system. Assuming everything that comes into a server side system from a third-party client is an attempt at breaking the system is a heavy-handed approach, but it will reap major benefits as the system grows. Having a validation system in place to thwart adversarial communications will always benefit your trustworthy clients.

In The Wild

Now that we all understand the different types of network based systems, let’s take a look at real world applications, how they are built, and the effects of the architecture of these systems.

Tom Clancy’s The Division

This game was built with client side trust as a benefit. There are trade-offs, of course, but an unfortunate side effect is that the game client, or the traffic the game client sends to the server, can be manipulated. This particular game has a lot of computational complexity, including world state and physics. These complexities would need to reside on the server if server side trust were to be leveraged, and that, in turn, becomes expensive. Having a server powerful enough to model the physics of enemies and the world (which plays a large interactive part in game play) becomes almost cost prohibitive. Ubisoft’s approach to Tom Clancy’s The Division was to enable client side trust from the beginning of development. This allowed the development team to quickly deliver a working (and beautiful!) game to their customers. As a side effect, the game is rampant with cheaters in the PVP areas where competition is high. In this case, the negative cost of cheaters in a PVP area affected a smaller base of their customers, as the PVP area was opt-in. The positive benefit is that very complex processes run on each player’s platform (PC, console, etc.), reducing the cost of server hosting for the game. A partial list of computations the server would need to validate from each client:

  1. Enemy position, inventory, and state (fire, poison, bleed, etc)
  2. Player position, inventory, and state
  3. NPC position, inventory, and state
  4. Projectile information (bullet drop, grenade location, damage, etc)
  5. Objective information (for each player connected)
  6. A. WHOLE. LOT. MORE.

This becomes cumbersome in a vast game like Tom Clancy’s The Division. It would also require a lot of changes to the game client if the game were to switch to server side trust. The server would have to maintain the entire game state with each message from each client. It would also eliminate a unique aspect of Tom Clancy’s The Division: each player has their own world with its own state. This allows a player to join another player’s game (in a completely different state) and help them in their quest line. It also enables a changing world, where certain events permanently change a player’s world in some way. World of Warcraft accomplished this in its many expansions, but the combat calculations in World of Warcraft and the overall computations required are minimized and streamlined for server side trust.

World of Warcraft

Possibly the best example of server side trust is World of Warcraft. This game is very light on many computational aspects while still immersing a player in the world through rich lore. The game itself is broken into different servers, each with a maximum player capacity. Each area within any given server is then broken up into smaller worlds, and each of those worlds is broken down further into smaller areas. This is called server sharding, and it helps balance the overall load of any sharded area based on population. This sharding is also why some of the characters you see in a major city disappear when they leave the city: they are migrated to a different server shard. It also explains why an area looks different when you enter it after an event: you’ve been moved to a shard keyed to your quest progress.
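
A rough sketch of the population-balancing idea (the capacity number and data layout are invented for illustration; this is not Blizzard’s actual implementation):

    SHARD_CAPACITY = 100  # illustrative cap on players per shard of a zone

    class Zone:
        """One game area, split into as many shards as its population needs."""

        def __init__(self, name):
            self.name = name
            self.shards = [[]]  # each shard is just a list of player ids here

        def assign(self, player_id):
            # Put the player on the first shard with room, or spin up a new one.
            for shard_number, shard in enumerate(self.shards):
                if len(shard) < SHARD_CAPACITY:
                    shard.append(player_id)
                    return shard_number
            self.shards.append([player_id])
            return len(self.shards) - 1

    city = Zone("major city")
    for player in range(250):
        city.assign(player)
    print(len(city.shards))  # 3 shards to hold 250 players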

Aside from sharding, there aren’t very complex battle calculations. There is no bullet drop, there is no projectile pathing, and there is no enemy inventory (enemy loot drops are calculated at the time of death based on loot tables). The entire game has been developed for server side trust, and some sacrifices were made to accomplish this. Those sacrifices were made up for through rich storytelling and an immersive world.

Player Unknown’s Battlegrounds

Player Unknown’s Battlegrounds is another popular and competitive game that uses client side trust. It has accumulated many cheaters due to its competitive nature. Among the things the game client is trusted with are:

  1. Inventory
  2. Hit Detection
  3. Collision
  4. Ammo
  5. Health
  6. Momentum

Any one of these things could be replaced with a message indicating another value. If I sent a message saying I just picked up the best gun in the game, I’d have the best gun in the game conjured up out of thin air. I could also send a message indicating I hit a player anywhere on the map, and that player would take damage. This game recently adopted Easy Anti-Cheat as a measure to prevent tampering with the client side trust. It works by providing a middle layer between the application itself and any process interacting with it. It verifies the integrity of data before sending it to the server and flags any suspicious messages. It also monitors processes that would tamper with the game client and flags those actions. Enough flags and Easy Anti-Cheat notifies the game company, who can later ban the player. This effectively moves server side trust into a client side layer that is not part of the game client itself. This type of middleware is currently a better solution for applications built on client side trust than rewriting them for server side trust.

Wrap Up

Competitive games are generally the target of cheaters, and many of them opt for client side trust for the lower latency and because it lets complex computations run on each player’s machine. The only real option for applications with client side trust that require a level of integrity is middleware that monitors the application process and any interactions with it. This is not entirely foolproof, but it does offer a greater deal of protection for other players against cheaters. But, as with anything that gives someone an edge in a competition, where there’s a will, there’s a way.

Efficiency

I am a fan of efficient processes. When I see potential for process improvement, I find myself drawn to making it better. I’ve done this at a past company I worked for, which did not have a properly defined software development life-cycle. I developed a process within my own teams that I thought was better. It certainly felt better than “could someone build this deliverable on their machine and email it to the delivery guy?” Nothing was repeatable. Nothing was automatic. Nothing was tested unless someone remembered to test it. And nothing was guaranteed to work. Sound scary? It was. The proof of concept that I would eventually pitch to the company at large revolved around automation, and automation at this company meant a few upgrades.

The company was on Subversion at the time, and this new thing called Git was around that everyone else was using and finding better. In one afternoon I copied one of the projects I was leading and converted it to Git while retaining all the history. It was easy to convert. It was certainly faster than I was expecting, given how slow Subversion is. And it was simple. I find Occam’s Razor to be a great mediator when arguing with myself: the simplest solution is often the right one. Switching to Git opened up a lot of doors for faster development without the feared SVN merge conflict (which happened multiple times a day). Git seemed more efficient with my team’s time. I was sold on Git. My team was sold on Git before I even installed it. How could I get the company to upgrade? I had to show them how awesome it was.

Next up was not having to ask my developers to build a deliverable. If there’s one thing I’ve learned about the inefficiencies of doing anything manually, it’s human error. Human error exists and can never be eliminated without removing the manual step entirely. Jenkins to the rescue! I’ve set up Jenkins in past jobs, so setting it up for a proof of concept was no big deal. About 10 minutes later it was up and running on my local computer alongside the Git server (I know, I know, bad form! This was before containers, people!). Having Git tell Jenkins that something had changed was easy. Almost too easy. A few manual builds to work out kinks in the Jenkins build, then a few configurations on my local Git server, and voilà! The Git server was talking to the Jenkins server, and a tag triggered a Jenkins build which stored the build artifacts indefinitely. Now, when we wanted to show something, we just cut a tag! (I would later expand this to automated deployment, but this was the proof of concept I needed so I could change a company’s process.)
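
The hook itself amounted to just a few lines. Here’s a sketch of the idea in Python (the URL, job name, and token are placeholders; it assumes the Jenkins job has its remote build trigger enabled, and a real setup may also need credentials or a CSRF crumb):

    #!/usr/bin/env python3
    # post-receive hook: Git feeds "<old-sha> <new-sha> <refname>" lines on stdin.
    import sys
    from urllib.request import urlopen

    JENKINS_TRIGGER = "http://jenkins.example.com/job/my-project/build?token=SECRET"

    for line in sys.stdin:
        old_sha, new_sha, ref = line.split()
        if ref.startswith("refs/tags/"):
            # A new tag arrived: ask Jenkins to run the build and keep the artifacts.
            urlopen(JENKINS_TRIGGER, data=b"")  # an empty body makes this a POST
            print("Triggered Jenkins build for " + ref)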

My teams used this process for a few weeks, tweaking things as we came across them. We knew (yes, by this time the entire team was on board with the new way, since it saved so much time) that we only had one chance at this pitch. After a few weeks I ran it by another team local to my office. They wanted in before I had even finished asking whether they’d like to be a guinea pig. They were on it within an hour.

Sometimes it is better to ask forgiveness than permission. In this case, I asked for a dedicated VM (again, before containers!) located at HQ for my project. I got one after about 2 weeks and started migrating Git and Jenkins to it. Once we migrated, everything was going great! We had buy-in from the entire local office, and things seemed to be going well with this out-of-control proof-of-concept-turned-beta project. Interestingly enough, productivity had increased enough that all this non-project-specific work was never noticed. I’m not saying you should go rogue, but I definitely should have pitched this sooner than I did. By this time, I asked for a meeting with the CTO and other lead engineers. After we figured out a time we could all be on a Skype call (migrating to Slack is a story for another time), I showed them what we were working on. I showed them Git. I showed them Jenkins. I showed them the entire process from new repository, through the Git Flow branching pattern, to first automated build. Boy, am I glad this was a Skype call. After the initial “you shouldn’t have done this without permission” speech came the “I’m glad you did, though.” They definitely liked what they saw. In my defense (and, I believe, that of everyone else in my local office at this point), we had tried to get continuous integration up and running but were continuously shot down. I felt like the poor guy behind the cart in this picture:

If you find yourself slowed by processes, think about how you can improve them!

So, the IT team in HQ took over the project and I walked them through some of the setup for Git and Jenkins. They, of course, made improvements (LDAP authentication, separate VMs, etc.) when they installed Git and Jenkins.

So, after about 2 months of this entire process, the company had adopted Git and Jenkins as the better solution. Teams were starting to migrate to Git and learn the Git Flow branching pattern. Everything was looking up! It sparked a bit of an overhaul of other processes and happened to fall right in line with the company’s CMMI Level 3 efforts (more on that in another post). Everything seemed right about this process: everything was much simpler.

I’ve learned a lot from this experience. For my own projects I have my Git repositories hosted on GitLab. It uses a build server I host on DigitalOcean. These builds are automatically deployed (within containers!) and freely available. Heck, this site has a GitLab repository with the configurations to repeat deployment. So does my Division Gear discord bot. If you ever find yourself repeatedly building the same things manually, it might be time to fix that.

Copyright Expiration is BACK!

In 1998, Disney (along with a group of other corporations) successfully convinced Congress to pass an extension to the existing copyright laws. Under the old rules, works published before January 1, 1978 were protected for 75 years. The 1998 change extended that to 95 years. This is great for corporations like Disney, whose iconic character Mickey Mouse was first published in 1928 in Steamboat Willie. The extension averted releasing Mickey Mouse into the Public Domain in 2004; he is now scheduled to enter the Public Domain in 2024 (1928 + 95 = 2023, so protection runs out at the end of that year), and I’m sure we will see another fight similar to the one in 1998 to extend copyright protection even further.

While those protections cover works published before January 1, 1978, anything created on or after that date is protected for the lifetime of the creator plus 70 years. So, this article will be entering the Public Domain sometime after January 1, 2219 (hopefully later! Predicting your own death is a little morbid).

So we can rejoice that, as of this moment, anyone can publish Robert Frost’s Stopping by Woods on a Snowy Evening without fear of violating copyright laws as it is one of many works entering the Public Domain today!

Stopping by Woods on a Snowy Evening

By Robert Frost

Whose woods these are I think I know.   
His house is in the village though;   
He will not see me stopping here   
To watch his woods fill up with snow.   

My little horse must think it queer   
To stop without a farmhouse near   
Between the woods and frozen lake   
The darkest evening of the year.   

He gives his harness bells a shake   
To ask if there is some mistake.   
The only other sound’s the sweep   
Of easy wind and downy flake.   

The woods are lovely, dark and deep,   
But I have promises to keep,   
And miles to go before I sleep,   
And miles to go before I sleep.