
Category: WebDev

Don’t build a website. Build a sales funnel!

This title caught my eye on the web. I have been thinking about it ever since.

It is a brilliant title because it immediately shifts your thinking about what you are building and for whom. 

When you set out to build a website before answering the tough questions of what it’s for and who it’s for, you will be staring at a blank canvas, not knowing where to start, what belongs at the bottom, or what items should go in the navigation.

But if instead, you focus on building a sales funnel, everything suddenly snaps into focus. 

Sales funnels have a particular purpose: to convert an interested visitor into a customer. 

They accomplish this by guiding your visitor through a journey, from being interested, to being inspired, then making a purchase, and being a happy customer. 

And this journey could happen on your site, via email, or social media. 

Simply by reading the three paragraphs above, you have a much better idea of what your initial page should have on it: for sure, you need a way to capture that customer’s email so they can get on the journey. And you may not even need a top navigation on that page!

What if you build your website in a very purposeful way, where each page and each component of a page needs to have a business reason for being there? Would you do away with all the fluff? Would you focus on what your audience needs instead of what everyone else is doing?

If money were not an issue

What would you do if you had an unlimited budget?

Answering this question is a useful exercise: it shows you what your online presence could become if budget were no constraint.

To make it easier to digest, I will divide this exploration into a few categories.


Performance

Load balancers – for high traffic websites – make sure your customers don’t have to wait around for your pages to load.

Performance Optimization – when you need to shave off every millisecond – load speed affects conversions, so it makes sense to have software that is as fast as possible. Performance optimization is a very broad area and includes items like code optimization, caching, content delivery networks, and load balancers.

Accelerated Mobile Pages – important if you care about SEO and the traffic that Google sends your way. They provide a significant speed improvement, and with fast loading times, conversions also increase. This optimization works best for publishers and less so for eCommerce sites.


Marketing and User Experience (UX)

Sales Funnels – use automatic email series to keep the conversation going with your prospects. And with conditional logic, you can tailor this conversation for each individual, so they don’t have to read through the material that is not relevant to them.

Chatbot and chat agent – leverage the power of AI to answer common questions for your visitors and customers. This bot, however, will not replace good support staff who can connect with the person on the other end. But it will offload some of the frequent questions.

Affiliate Program – selling is the most challenging process in a company, but it’s the only one that generates revenue. An affiliate program is a straightforward way to recruit a sales force that will work for you.

Correct Metadata – use correct metadata on your pages, making it easier to share content across social media and various content aggregators. If you ignore this, your content will not get noticed when placed next to someone doing a fantastic job with their meta tags.

Conversion tracking – if you do not set up goals and do not track how well they are doing, you will have no way of knowing what works. Every new decision will be a “wild guess” instead of an informed one. You can set up tracking using tools like Google Analytics, Facebook pixel, and in-house software.
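For example, here is a minimal sketch of reporting a completed purchase to Google Analytics with the gtag.js event call. It assumes the gtag.js snippet is already installed on the page; the event name follows GA4’s ecommerce conventions, and the order values are placeholders.

```typescript
// Hedged sketch: send a GA4 "purchase" event via gtag.js.
declare function gtag(...args: unknown[]): void; // global provided by the gtag.js snippet

function trackPurchase(orderId: string, value: number, currency: string): void {
  gtag('event', 'purchase', {
    transaction_id: orderId, // placeholder values for illustration
    value,
    currency,
  });
}

trackPurchase('T-1001', 49, 'USD');
```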

Email Deliverability – you can be the best copywriter in the world. It will not help you if your users never get your email. Choose a good delivery service and configure things like DKIM and SPF correctly.

Advanced SEO – most modern publishing tools have built-in SEO helpers, but in some cases, more advanced tactics are needed to get that ranking you are looking for.

A/B Testing – it is best to make a decision based on your audience’s real data when possible. A/B testing helps you fine-tune your design and messaging for better conversions.

The user journey – how do people use your product or service? How can you improve that experience? Mapping the user journey can help you fine-tune the experience, increasing both conversions and customer satisfaction.


Branding and Design

Intuitive Search – for content-heavy websites, like community forums, course libraries, and educational websites, a powerful search engine makes the difference between high engagement and content lost in inaccessible parts of your site. Most public websites rely on Google search to solve this issue for them, but what do you do if your content is private, behind a paywall? The tool to use here is Elasticsearch.

Accessibility – make sure that people with disabilities can use your services. It’s not only a legal requirement in some countries, but it is also the right thing to do.

Companion App for iOS and Android – a companion app, if done right, can unlock new ways to interact with your customers and add value for them. Sometimes this is just a mobile site packaged as an app. Still, a better experience is to take advantage of the many sensors on portable devices and create a unique and valuable experience.

Streamlined Checkout – don’t ask for a ton of information if all you need is an email address to deliver the digital products. There will be plenty of opportunities to collect other details later on. A streamlined checkout experience can significantly reduce the shopping cart abandonment rate.

Mobile Optimization – this term is a bit of a misnomer since you should think “mobile-first” and optimize for desktop later. But I am adding this here just in case it is not clear to you that more than half of traffic comes from mobile. Also, mobile does not mean only “small screens.” It means access to a camera, sensors, and information that you can use to create an experience that would not be possible on a desktop.

Style Guides – use style guides to ensure your look stays consistent across channels in interactions with your customers.


Insurance (backup and testing)

Testing – tests provide no direct benefit to either your customers or you, the website owner. Because of that, they are easily overlooked or done wrong. A broken or buggy process can cost you a fortune in lost revenue. Do your tests, and do them right. Follow this advice, and you don’t have to hope it will work; you know it will work.

  • Automated end-to-end tests – Don’t wait for a visitor to take the time to report a problem. Instead, have automated scripts that test the business-critical processes daily and immediately notify you when something breaks (see the sketch after this list).
  • Stress tests – The fact that your home page feels snappy when you are the only one using it does not mean much. How will your infrastructure handle a spike in traffic? Especially in situations where you know a marketing promo will hit? Do you have auto-scale enabled? How far up should you scale? Stress tests can surface problems that only show up in high traffic situations, and they will also give you numbers you can work with when choosing the server specifications. High traffic can potentially mean lots of conversions. But it can also mean zero conversions if your server keeps crashing. That’s both revenue lost and marketing money wasted on a campaign that went nowhere.
  • Penetration tests – A good-looking snappy page is not necessarily secure. A security audit and penetration tests allow you to discover security holes and fix them before a bad actor abuses them.
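Here is the sketch mentioned above: a minimal automated end-to-end check written with TestCafe (one possible tool for this). The page URL and the selectors are placeholders for your own checkout flow; run it on a schedule and wire the failure report into your alert channel.

```typescript
// Minimal end-to-end smoke test sketch using TestCafe.
// The URL and selectors below are placeholders, not a real site.
import { Selector } from 'testcafe';

fixture('Business-critical: checkout')
  .page('https://example.com/pricing');

test('a visitor can reach the payment step', async t => {
  await t
    .click(Selector('a').withText('Buy now'))
    .typeText('#email', 'smoke-test@example.com')
    .click('#continue')
    .expect(Selector('h1').withText('Payment').exists).ok();
});
```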

GDPR compliance – obey the law. It’s cheaper than paying fines. And with a clear design, it does not have to look bad.

Resilient Design – with progressive enhancements – this is a way to future-proof your web application by assuming that new technologies will emerge, so you plan to support those while not abandoning your old customer base.

Automated Backups – are the best insurance policy against data loss and security vulnerabilities. They need to be automated, so you don’t forget. And you need to test them to make sure they work.


Integration and Automation

Data import, export, and migration – having this in place helps you avoid lock-ins with a particular technology and provider. And it also opens up many integration possibilities with third-party tools. Being flexible makes you resilient.

Interoperability – how well do you play with others? Publishing clear and useful APIs can help increase the adoption of your service. It also increases the chance of your services being integrated and creating value in a way that you cannot foresee right now. Add artificial intelligence to the mix, and it can get exciting.

Automated email processing – for things like creating a support ticket for each email sent to a support address. Or use it to trigger automated workflows or to publish content from your phone.

Support Ticket System – responding to support via email and not using a system is comfortable and easy but will hurt you in the long run. It will be impossible to track what was said to whom, and issues will fall through the cracks. Also, customers expect a premium brand to have a professional support system.

Single Sign-On – is a way to allow your users to log in once and then get access to all the relevant applications. When a visitor can sign up with Google or Facebook, it reduces the friction of taking action, and it offloads the concern of storing a password to the identity provider. Be careful, though: make sure your users can still log in even if they lose access to their email, and that you own your audience, not the identity provider.

Administrative Dashboards – are dedicated applications or pages that will give you an overview of how your website performs. How are your metrics doing, and what are the outstanding issues?

RSS Feed – a free way to make your content discoverable and accessible for anyone interested. I believe this is an undervalued and underused feature. It allows your readers to stay in direct touch with you, and you don’t have to pay for a newsletter service or boost your posts. RSS is for readers what podcast feeds are for listeners (in fact, they use the same technology).

Web Push – the ability to send web notifications to your users, even if they have closed your website. It is still new, and it still has impressive conversion rates. (At the moment, it only works on desktop browsers and Android). Don’t be spammy, though, as the users can block them with a click, and getting unblocked is not very easy.
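If you are curious what the subscription side looks like, here is a hedged sketch using the standard Push API. It assumes you already serve a service worker at /sw.js and have a VAPID public key; both names are placeholders.

```typescript
// Hedged sketch: subscribe the visitor to web push notifications.
async function subscribeToPush(vapidPublicKey: string): Promise<PushSubscription | null> {
  if (!('serviceWorker' in navigator) || !('PushManager' in window)) {
    return null; // the browser does not support web push
  }
  const permission = await Notification.requestPermission();
  if (permission !== 'granted') {
    return null; // respect the visitor's choice; don't nag
  }
  const registration = await navigator.serviceWorker.register('/sw.js');
  // Older browsers may require converting the key to a Uint8Array first.
  return registration.pushManager.subscribe({
    userVisibleOnly: true,
    applicationServerKey: vapidPublicKey,
  });
}
```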

Scheduled tasks – send daily reports, check website integrity, run maintenance tasks. Anything that you find yourself regularly doing should be programmed in as a scheduled task, especially backups. You can use cron jobs or an automation platform like Zapier for these.
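As a sketch of the code-based route, here is what a scheduled task can look like, assuming the node-cron package; the schedule and the two routines are placeholders for your own jobs.

```typescript
// Hedged sketch of scheduled tasks with the node-cron package.
import cron from 'node-cron';

// Every day at 03:00: run the backup, then send the daily report.
cron.schedule('0 3 * * *', async () => {
  await runBackup();       // placeholder: your backup routine
  await sendDailyReport(); // placeholder: your reporting routine
});

async function runBackup(): Promise<void> { /* ... */ }
async function sendDailyReport(): Promise<void> { /* ... */ }
```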

Automatic content distribution – you don’t have to share your content on all the social media accounts manually. You can, and you should, use tools that do this automatically.


Security

Security Audit – a security audit can help uncover problems that can only be discovered by looking at your code, software, and hardware architecture. This audit becomes crucial if you deal with very sensitive data and a breach would cost you more than having regular security audits and penetration testing.

Although important, nobody likes tests!

I should not have to write this, but testing your web application is very important, especially if you care about your brand being perceived as premium. 

And by testing, I don’t mean “does my homepage load fine?”. I mean the comprehensive end-to-end testing and stress tests to ensure your app still works when that marketing campaign hits. 

Good tests are essential in the quality assurance process, yet I have seen websites and applications that do not even fail gracefully, with a friendly error message that explains what happened and offers a way to move forward.

Many software workflows attempt to convince the developer to test first or make sure their code is testable, but most developers do not use them. 

I thought about it, and I believe I found the reasons. 

Nobody likes tests because:

  1.  they are boring to write
  2.  it is not easy to write code that is testable – you need a specific mindset
  3.  they offer zero visual feedback to the paying customer – so in that sense, it is invisible thankless work
  4.  they need to be maintained along with the code base that does the actual work

Tests are a tough sell to both developers and their clients. More often than not, we proceed with the attitude: “we will fix it when someone complains!”

On this blog, I care a lot about value. And from that perspective, I will say this: no client will ever come to you and say, “I need a website that will require about 20,000 tests for a code coverage of 90%”. Tests have zero value to them. Instead, they need a solution to a real problem, like: 

  • They need to build a premium brand. 
  • They want to sleep well at night, having confidence that the vast majority of the app functionality works and will continue to work even under stress. 
  • They need actionable data to help them decide where to move next with their web application: what is the bottleneck in performance? What is hurting conversions? 

These are all items the client cares about, and a possible solution is to write tests. But what you are selling is peace of mind, not code coverage. 

And yes, in some cases, especially for MVPs, tests are not essential for the bottom line, so even if you know they are important in the QA process, they may come later, once the product has proved to be a hit.

As a developer, I would get into the practice of doing tests and writing testable code. It is an excellent skill to have when things change faster and faster, and interoperability creates more complex systems. 

And as a client, I would put some monetary value on my peace of mind and knowing the app won’t break and see what solutions I can buy for that budget. 



Improve your website performance by separating concerns

The problem

I have a slow WordPress site that will resist all optimization attempts.

What is the most common advice you get for speeding up a WordPress site? 

  • remove unused plugins
  • update all the software
  • use the latest version of PHP 7
  • install a caching plugin

This list is all good advice and things to reach for first, but what do you do when your WordPress install still takes 13 seconds to load a page, even after all the optimization is done?

In my case, the problem was that the website was trying to do too many things. And the optimizations above did not help much. 

Here is what I mean:

  • the website had multi-language support
  • contact forms done with Contact Form 7
  • subscribe popups using NinjaPopus
  • animated sliders on the homepage
  • hundreds of blog posts
  • a WooCommerce store 

Because of how WordPress works, all items were loaded, regardless of the page you were looking at. The multi-language setup was not working well with the caching system. And I could not uninstall any of the plugins because all of them were needed somewhere. WordPress does not do selective plugin loading.

It drove me crazy that I would need to wait 13 seconds to open up a blog post that would request hundreds of resources (CSS and JS) that it did not need. It was a page with one image and some text but a truckload of “invisible add-ons.” This page should load in milliseconds!

Some have suggested writing yet another plugin to remove the unnecessary scripts from the pages that don’t use them. I understand how that would improve the loading time, but on principle, I don’t want to have code that adds stuff just so I can immediately remove it a few microseconds later. That’s just bad practice.

I came up with the solution to split the site into two: one for the simple blog and one for the store. I also dropped multi-language support. 

The Pros

  1. The blog is made up of static pages – so you can deploy very effective and aggressive caching.
  2. I could also split the plugins – there was no need for the blog to load all the WooCommerce code.
  3. The store site could focus better on selling and keeping the buying experience smooth. 
  4. The improvement in performance was dramatic, as I could now optimize each part independently, without conflicts.
  5. A bonus side effect is that I can now work on the blog and not worry that the store will be affected and vice-versa.

The Cons

  1. There are now two websites to maintain and think about.
  2. They need to look the same in design, so they feel part of a whole.
  3. The search function is now limited – it returns either posts or products, depending on where you are using it.
  4. Tracking the user activity is more complicated.
  5. Adding multi-language support means adding a new site for each language – which does not make business sense right now.

There is an obvious trade-off here. There are more pieces to take care of, but you get to optimize each one individually and fine-tune them for their specific purpose. 

In Conclusion 

If the common performance tuning is not doing much for you, maybe the structure you have is too complex, and your website would benefit from being split up into smaller but more effective pieces. Of course, this effort only makes sense if having fast loading pages is essential to your business.

How to bring Life to a Large Content Library on Consciousness and Metaphysics

I am studying various membership offerings on websites related to consciousness, metaphysics, and related topics. 

What I have seen so far is what I call the “data dump!”

After you purchase your membership, you are presented with an overwhelming list of items you can study. Sometimes they are organized in various categories. Other times, they are not. 

This kind of library poses a few problems:

1. It just feels overwhelming. Where do you start? What should you look at next?

2. When the library is full of audio and video material, it is not searchable. And I don’t want to watch a two-hour video only to realize that it was not the information I wanted.

3. If this library is behind a paid membership, there is little incentive for users to keep their membership. The exception here is when new content is added, so the members hang around for that. But it still leaves a ton of old content dead in the water.

We Can Do Better

I have some ideas on tackling these issues, but I confess I have not seen them implemented yet. 

1. Where do you start? 

The library should have a roadmap, with a clear START HERE sign. Everyone new will appreciate this: one button, instead of hundreds of items to choose from. Of course, this works if each item has a “Go here next” button. You are creating a pathway through your library, guiding your reader. 

To take this to the next level, the new members can take a quick quiz at the “START HERE” landmark, based on which they will get a different pathway that will better suit their interest. I think this makes the library much more valuable. 

A notable mention here is the content drip approach. I am not a big fan of this because I like to move at my own speed and jump around if I want to. That being said, even content drip is better than no delivery strategy.

2. Making the video and audio searchable.

Each video and audio should have a description with time indexes describing what is going on: topics addressed, questions answered, resources, etc. If you did not do this for each video or audio as you created it, you are faced with a considerable task five years later.

Soon, artificial intelligence will come to the rescue, but until then, you could hire someone, or more than one, to go through the videos and create these indexes for you. You can find people willing to help on Fiverr, but be ready to spend some money. For a paid membership, you should be able to recoup the expense quickly, and it will significantly increase the library’s value!

3. Reviving old content

A spiritual library never gets truly old. Usually, the information is timeless, and it can help new and old members alike. But new members are not likely to dig around in the past five years, especially when new content is being added each month or each week. 

A pathway through the library will help. Making the content searchable will also expose some gems. But you can take this much further with automatic semi-random content delivery.

Here is what I mean: 

Each week, send an automated email to your membership suggesting one of the library items, along with the notes associated with it, and invite the members to study it. Picking this by hand may be too much work, so select one semi-randomly instead. Semi-randomly means that you use a quiz or historical data to determine your members’ interests, and then randomly choose items that they have not seen yet but might be interested in.

Such a message will be highly relevant. Of course, it requires some creative technical solutions to segment your audience based on interests. Either your newsletter provider can do that, or a piece of code in your software could handle this.
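As a rough sketch of what that piece of code could look like (the item shape, the interests, and the “seen” tracking are illustrative assumptions, not a specific platform’s data model):

```typescript
// Hedged sketch: pick one unseen library item that matches the member's interests.
interface LibraryItem {
  id: string;
  title: string;
  topics: string[];
}

function pickWeeklyItem(
  library: LibraryItem[],
  memberInterests: string[],
  seenItemIds: Set<string>,
): LibraryItem | undefined {
  // Keep items the member has not seen yet and that match at least one interest.
  const candidates = library.filter(
    item =>
      !seenItemIds.has(item.id) &&
      item.topics.some(topic => memberInterests.includes(topic)),
  );
  if (candidates.length === 0) return undefined;
  // Random pick within the filtered pool: random, but guided by interests.
  return candidates[Math.floor(Math.random() * candidates.length)];
}
```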

Imagine how much more valuable the old content suddenly becomes and how much better you serve your audience! 

4. And a bonus: create a community around the library.

It’s much more engaging to comment on something and have a discussion around an item with your peers. You can ask questions if you need more clarity, or you can be generous and help others understand or point them in the right direction. 

A community will take care of this. A basic comment feature under each library item is OK, but a forum is much better, as it allows your members to create new topics that you may not have thought of.

Can you think of more?

If you have other ideas on making a spiritual library more “alive”, I am very interested to know. Reach out!

Still not using Log Files in your app?

Have you ever had to contact support for a web app or a plugin to fix a problem, and the first thing they ask is for full access to your web server so they can “debug” the issue? 

This request frustrates me to no end. 

It is unprofessional, and it is lazy. 

The reason support asks for this is so they can run tests and inspect the results on your LIVE server. If that makes you nervous, it should! How can you know that they will not accidentally mess with your customers’ data? Not to mention all the privacy issues that crop up as soon as you hand your keys to a third party you do not control.

A proper way to deal with providing support for your app or your plugin is to add logs. A log file journals the activity and the data passing through your code. Inspecting a good log file will almost always let you know what the problem is and where the problem is. When a customer calls you for support, you only need to ask for the log files, not the keys to the server.

In my experience, a good log file creates a breadcrumb trail that documents the data flow and the branching decisions in your code. Ideally, inspecting the log file alongside your code allows you to precisely follow along and determine what was wrong, without even having to run any code. 

A common mistake is to be unnecessarily verbose while at the same time not documenting the branching decisions. Silently discarded errors and exceptions are the usual pitfalls, and a close second are if/else branches where only one of the branches leaves a mark in the log.
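To make the idea concrete, here is a minimal sketch of that kind of breadcrumb logging. The log() helper and processOrder() are illustrative names, not a specific library; the point is that every branch and every error leaves a mark.

```typescript
// Hedged sketch: every branch leaves a mark, and no error is silently discarded.
import { appendFileSync } from 'node:fs';

function log(level: 'INFO' | 'WARN' | 'ERROR', message: string): void {
  appendFileSync('app.log', `${new Date().toISOString()} [${level}] ${message}\n`);
}

function processOrder(order: { id: string; couponCode?: string }): void {
  log('INFO', `processOrder start id=${order.id}`);
  if (order.couponCode) {
    log('INFO', `applying coupon ${order.couponCode}`);
    // ... apply the coupon ...
  } else {
    // The "else" branch leaves a mark too, so the trail stays complete.
    log('INFO', 'no coupon supplied, charging full price');
  }
  try {
    // ... charge the customer ...
  } catch (err) {
    // Record the error before handling or rethrowing it.
    log('ERROR', `charge failed for id=${order.id}: ${String(err)}`);
    throw err;
  }
  log('INFO', `processOrder done id=${order.id}`);
}
```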

Security and Privacy

Now that you understand why log files are a must, especially in a client-server situation (like all web applications), you need to be careful not to store sensitive data in the log file. Don’t store passwords or credit card numbers, and unless absolutely necessary, do not store emails.

If sensitive data is required for you to be able to rebuild the data flow, make that available under a specific “log level” that is only activated on request. And in some cases, the entire log system can be activated only when trying to debug a problem. With this approach, however, you lose historical data that you need to fix the problem.

Always provide a way for an admin to flush the logs. 

Rolling Over

I am an overly enthusiastic user of log files, simply because they work, and they speed up the process of solving problems. But there is a mistake I kept making for far too long: not rolling the log files automatically. That meant the logs grew and grew until they ate up all the allocated disk space.

Oopsy! 

When using log files, decide when a log entry is too old and have an automated mechanism to remove those logs. Rolling the log files once a month (log1, log2, log3, etc.) and removing the very old files is a useful approach. 
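Here is one hedged way to automate that: roll the current log into a dated file and prune anything past a retention window. The file names and the six-month window are assumptions; run it from the same scheduled-task mechanism you use for backups.

```typescript
// Hedged sketch: monthly log rolling with a simple retention window.
import { existsSync, renameSync, readdirSync, statSync, unlinkSync } from 'node:fs';

const RETENTION_MONTHS = 6; // assumption: keep roughly half a year of logs

function rollLogs(): void {
  if (existsSync('app.log')) {
    const stamp = new Date().toISOString().slice(0, 7); // e.g. "2024-05"
    renameSync('app.log', `app-${stamp}.log`);
  }
  const cutoff = Date.now() - RETENTION_MONTHS * 30 * 24 * 60 * 60 * 1000;
  for (const file of readdirSync('.')) {
    if (/^app-\d{4}-\d{2}\.log$/.test(file) && statSync(file).mtimeMs < cutoff) {
      unlinkSync(file); // drop logs older than the retention window
    }
  }
}
```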

If you don’t currently use log files, what is your strategy to support and debug your application while it is running on the customer’s LIVE server? I hope you will not say: “get root access and hack away until I find the bug” 🙂

Resilient Web Design

The things that stood the test of time had a solid, clear foundation that we could later build or improve upon. They strived to make the fewest possible assumptions. This approach allowed future ideas that no one could have predicted to still connect with that solid base.

Resilient Web Design strives to provide just that: a set of guiding principles to create web designs that will still work even if we don’t know what the screen of the future will look like.

We used to have small desktop screens, then bigger ones, and now huge ones. Then all of a sudden, in came the mobile devices, then tablet devices. 

We are clueless

You have to embrace the idea that you have no clue what kind of screen size will be used to consume your content. 

And this is what Resilient Web Design is all about. 

You no longer design for a specific target screen size and “(kind of) fix it” for the rest. 

Your first media query is no media query. Your HTML content is responsive by default, with no need for CSS. This responsiveness has always been there, but the power of this feature is only now becoming evident. 

We have been designing the web “backward,” making arbitrary assumptions about the browser’s viewport size. This way of designing is a bias we have been dragging along from design on paper, and it is so ingrained that we don’t see it. The new reality is that you can no longer know ahead of time the width and the height of the canvas used to present your content.

That can be scary, but also freeing. And as a creative, it is a problem that can be solved by design and that, I imagine, should excite you!

For a long time, I thought that because browsers are so forgiving with HTML and CSS syntax, they encouraged poor code and allowed both developers and designers not to use standards. I wanted the browser to cry foul and reject any page with errors in the code. This behavior would force the creators of that page to fix it! And “get it right!”

I see now that browsers being so forgiving allowed for massive innovation and flexibility. By being lax with errors, old browsers would not choke on new features added later on. This approach has allowed the web to grow as innovative and fast as we see today. And perhaps we should adopt a similar mindset in all the systems we design. Like Postel’s Law:

“Be conservative in what you send; be liberal in what you accept.”

Converting your content into a web app, which requires JavaScript, may also not be a good idea, as it departs from the lax treatment of errors. If your JavaScript code does not load or execute for any of the many possible reasons, your users will be looking at a blank page.

The solution that Resilient Web Design proposes for this is “progressive enhancement.”

With this concept, you provide a minimum viable experience that you then build upon (enhance), using feature detection (NOT browser detection) and the lax way that browsers treat new code, to improve the experience that the end user is having.

Using feature detection means that your design will NOT look the same for everyone. I cringe at this thought, but relinquishing control enables everyone to consume the content and enables some to take advantage of brand new browsers and features. It is a win-win situation. You don’t design for IE6 only because some of your user base may still be using it. Yet, at the same time, you don’t ignore that audience by assuming that everyone is running the latest version of your favorite browser.

“If a website looks the same on a ten‐year old browser as it does in the newest devices, then it probably isn’t taking advantage of the great flexibility that the web offers.” (Resilient Web Design)
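As a small illustration of feature detection, here is a hedged sketch. It assumes your markup ships images with a regular low-resolution src plus a data-src pointing at a full-resolution version (an assumption about your pages, not a standard); every browser renders the baseline, and browsers that support IntersectionObserver get the enhancement.

```typescript
// Hedged sketch of progressive enhancement via feature detection.
// Baseline markup: <img src="low-res.jpg" data-src="full-res.jpg"> renders everywhere.
if ('IntersectionObserver' in window) {
  const observer = new IntersectionObserver(entries => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        const img = entry.target as HTMLImageElement;
        img.src = img.dataset.src ?? img.src; // swap in the full-resolution image
        observer.unobserve(img);
      }
    }
  });
  document
    .querySelectorAll<HTMLImageElement>('img[data-src]')
    .forEach(img => observer.observe(img));
}
// Without the feature, visitors still see the low-resolution images. Nothing breaks.
```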

If this way of thinking inspires you, then you should read Resilient Web Design and deeply ponder how you can implement the principles in that book in your systems. If you have already used this in your designs, I’d love to see it applied. Please post a link in the comments.

Automated “downtime” alerts

Do you know that frustrating moment when you realize that your website has been offline for three days? Or that your shopping cart stopped working last week? 

That moment is also valuable because you now know that something is broken, so you can fix it. But at the same time, you wish you had learned of this sooner!

On a community website, this may not be an issue, as your users will let you know when the site is broken, but that is not the case for a blog, or an online store, or a landing page that is collecting leads. 

You could set a daily reminder to check that things are OK, but that will chip away at your precious time, and it quickly becomes boring, so you will begin to forget to do it or begin to think that you don’t have to monitor the website anymore.

I am all about automation, so let’s automate this! 

Google Analytics

The easiest way, which is also free, is to use Custom Alerts from Google Analytics. The logic is simple. You have an expected value of daily traffic (based on historical data), so you create a custom alert to let you know if it drops below that. Of course, you need to have Google Analytics installed on your pages for this to work.

Pingdom

Another way is to use a tool like Pingdom. I have used them for a long time in the past. They no longer have a free tier, but I think the value you get from the service is well worth the $10/month they ask for it. I like Pingdom because they provide more than just “your website is down” notifications. They provide performance analytics too, which, as we know, is a factor in how your website ranks in Google searches.

But the real power of Pingdom is transaction monitoring. Transaction monitoring helps you know if a process is working, not just a page: a process like the signup form, or progressing through making a purchase. These are incredibly difficult tests to set up by yourself, and you get them for $10/mo.

In-House Tools

You can also write mini scripts that load your webpages and inspect the results for clues to determine if the page functions as you intend. Since I am a software developer, that is what I use today for most of my projects.

The downside is that you have to write these scripts, test them, and maintain them. Depending on your team composition, that may cost you more than using something like Pingdom. 

The upside is that since it is your code, you can do all sorts of interesting things with it, not just email notifications. You can use that to trigger different processes and even attempt an “auto-fix” by restarting relevant processes or clearing out the caches. 
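To give a sense of the scale involved, here is a minimal sketch of such a script. The URL, the expected text, and the notify() channel are placeholders, and it assumes a runtime with a global fetch (for example, Node 18 or newer).

```typescript
// Hedged sketch: load a page and check that business-critical content rendered.
const PAGE_URL = 'https://example.com/pricing';   // placeholder
const EXPECTED_MARKER = 'Add to cart';            // placeholder

async function checkPage(): Promise<void> {
  try {
    const response = await fetch(PAGE_URL);
    const body = await response.text();
    if (!response.ok || !body.includes(EXPECTED_MARKER)) {
      await notify(`Check failed for ${PAGE_URL} (status ${response.status})`);
    }
  } catch (err) {
    await notify(`Check failed for ${PAGE_URL}: ${String(err)}`);
  }
}

// Stand-in for whatever alert channel you use: email, Slack, SMS, a webhook...
async function notify(message: string): Promise<void> {
  console.error(message);
}

checkPage();
```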

A more powerful subset of this is writing automated tests for your web apps using a tool like “TestCafe” to simulate a user interacting with your web application going through a purchase or signup process. 

You can create custom monitoring and analytics tools to aggregate data from multiple signal sources that can provide insights not readily available in Google Analytics. For example, you can monitor how a campaign is affecting not only your website but also social media engagement across all the networks you care to track. 

The Power of Using APIs

Many years ago, I set up my very first website. It was a Sudoku generator based on a selected difficulty level.

To promote the website, I wanted to have a newsletter so I could email my subscribers a daily puzzle to print out.

At the time, I was using AWeber as my newsletter service.

I was very annoyed with the fact that to capture the email of my visitors, I would have to send them to a new AWeber page where they would fill out a form, then instruct them to go to their email to click the confirmation link, which would take them to a confirmation page on AWeber, and then finally back to my website.

Those were way too many clicks to get yourself a printable sudoku puzzle!

What I wanted was a way to plug into the AWeber service and communicate with it, on my visitors’ behalf, while the visitors stayed on my website. What I wanted was an API, which is short for Application Programming Interface.

They did not offer that at the time, so I decided to simulate one by using a “fake browser” to make it “as if” the user had opened their page instead of mine.

I was very proud of my solution, and it worked very well for about ten days until my account was banned for violation of terms of service.

Today they do offer an API, so I don’t have to resort to “shady tactics” to keep the users on my page.

I use this little story to make it evident why APIs are so powerful. I am all about automation and integration, and APIs make all this possible in a way that is reliable, makes sense, and does not violate any agreements 🙂

I don’t think it makes sense to create an online service in today’s world and not develop an API for it. Interconnectivity and interoperability increase the rate of adoption of your service. And you open it up to be used in ways that you may not even have imagined, especially if you connect it, for example, to a platform like Zapier.
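To make the idea concrete, here is an illustrative sketch of the API approach from the story above: the visitor stays on your page while your code talks to the newsletter service. The endpoint, payload shape, and authentication are hypothetical, not any specific provider’s API.

```typescript
// Hypothetical sketch: subscribe a visitor through a newsletter provider's API.
async function subscribe(email: string): Promise<boolean> {
  const response = await fetch('https://api.newsletter.example/v1/subscribers', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.NEWSLETTER_API_KEY}`, // hypothetical key
    },
    body: JSON.stringify({ email, list: 'daily-sudoku' }), // hypothetical payload
  });
  return response.ok; // the visitor never leaves your page
}
```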

In conclusion, I feel that all software development is moving towards building APIs that will talk to each other. Even the front end of websites will become a templating layer making requests to a back-end API.

This change will bring about dramatic shifts in what software developers do and will open the doors for non-developers to be even more expressive and sophisticated in their creations. Add AI to this mix, and we can only guess at the limits 🙂

Event-Based Programming

After you work long enough on software projects, it will become self-evident why complexity is your enemy. Pieces of code that are highly dependent on each other will result in a maintenance nightmare. You cannot change or upgrade anything without risking breaking the different parts that are tightly connected to it.

The solution I have found that works best is “Event-Based Programming.” I did not invent it; it has been around for a long time. I discovered that adopting this pattern has made maintenance much more straightforward. 

In a nutshell, your program is no longer a collection of functions that call each other in an ever-increasing web of complexity. Instead, you have components that talk to each other by raising or listening to events. 

This breakdown allows you to change each event generator or event listener individually, and as long as the event format does not change, you don’t risk a breakdown in communication.

An event generator will say: “Hey, something interesting has happened, and here are the details.” And it does not care what happens with that announcement. It could be that nobody cares, or it could be that many will take action on that event. 

An event listener, on the other hand, does not care how an event was generated. As long as something interesting happens, it will act on it. 

This decoupling makes debugging super easy too, because you can test components independently by merely looking at the kind of “chatter” they generate.
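Here is a minimal sketch of the pattern using Node’s built-in EventEmitter; the event name and payload shape are illustrative. In the browser, CustomEvent and dispatchEvent give you the same kind of decoupling.

```typescript
// Hedged sketch of event-based decoupling with Node's EventEmitter.
import { EventEmitter } from 'node:events';

const bus = new EventEmitter();

// The event generator announces that something interesting happened.
// It does not care who, if anyone, is listening.
function completeOrder(orderId: string, email: string): void {
  // ... charge the card, save the order ...
  bus.emit('order.completed', { orderId, email });
}

// Listeners can be added, changed, or removed independently,
// as long as the event format stays the same.
bus.on('order.completed', ({ email }) => {
  console.log(`send a receipt to ${email}`);
});
bus.on('order.completed', ({ orderId }) => {
  console.log(`notify the warehouse about order ${orderId}`);
});

completeOrder('A-42', 'customer@example.com');
```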

If you’re reluctant to adopt “events” in your codebase, now it’s time to make the jump.