In recent years, payment API providers have made integrating payments much easier than it used to be. Instead of dealing directly with banks and processors, ecommerce apps can integrate with payment gateways that accept all major credit cards and most popular payment methods such as Apple Pay and PayPal. Large PDF instruction manuals have been replaced by intelligent documentation sites with walkthroughs and tutorials. Despite that, it is not uncommon to hear developers refer to payments as their least favorite part of the development process. Payments integrations are often seen as a necessary evil, to be done once and hopefully forgotten thereafter, the reasoning being that a better payments integration is rarely a profit center for the company.
I have spent the last few years working in the online payments industry, building APIs, SDKs and reliability tools. While payments integration has gotten easier, developers still make mistakes that are easily avoidable. Here are some of the best practices I would recommend for testing payments in your applications.
Isolate interaction between application code and the payment gateway in a package: Once an app grows to a certain size, it may interact with payment gateways in several ways. You may be accepting recurring payments, consuming webhooks from the payments provider, doing just-in-time checkout or integrating with point of sale systems. Having your own package that abstracts away the payments APIs centralizes all traffic to and from the payments provider. You can add your own logging and monitoring, stub out the interaction with the payments API for faster unit tests, and centralize knowledge about how you serialize and deserialize messages to and from your payments provider, as in the sketch below.
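As a minimal sketch of what that seam might look like (the gateway SDK, logger module and field names here are hypothetical stand-ins for whatever your provider and stack use):

```js
// payments/index.js - the only module in the app that talks to the gateway
var gateway = require('hypothetical-gateway-sdk'); // stand-in for your provider's SDK
var logger = require('./logger');                  // your own logging module

// One place that knows how orders map to the gateway's wire format
function toGatewayCharge(order) {
  return {
    amount_cents: order.totalCents,
    currency: order.currency,
    nonce: order.paymentNonce
  };
}

function charge(order, callback) {
  logger.info('payments.charge.request', { orderId: order.id });
  gateway.transactions.create(toGatewayCharge(order), function (err, result) {
    logger.info('payments.charge.response', { orderId: order.id, failed: Boolean(err) });
    callback(err, result);
  });
}

module.exports = { charge: charge };
```

With this seam in place, unit tests can stub out charge and never touch the network, and every outgoing request flows through one set of logs.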
Sandbox testing: Most payment APIs/gateways expose a sandbox environment where you can exercise a real integration with the API without moving any money. Ideally, the integration tests running continuously in your CI system (Jenkins/Travis/Circle CI) should be hitting those endpoints.
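A sketch of what such a CI test might look like, using Mocha and the hypothetical package above (the nonce value is a placeholder for whatever test payment method your provider's sandbox documents):

```js
// test/sandbox-test.js - runs continuously in CI against the sandbox, moves no money
var assert = require('assert');
var payments = require('../payments');

describe('sandbox integration', function () {
  it('creates a successful test charge', function (done) {
    var order = {
      id: 'ci-smoke-1',
      totalCents: 100,
      currency: 'USD',
      paymentNonce: 'sandbox-valid-nonce' // placeholder sandbox payment method
    };
    payments.charge(order, function (err, result) {
      assert.ifError(err); // the sandbox should accept this charge
      done();
    });
  });
});
```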
Monitoring: You should monitor your sandbox integration as well as your live system. What does the graph of 200 vs. 400 HTTP response codes from the payments API look like? Are you getting unexpected 400s? How about 500s? What do the response times look like?
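One lightweight way to get those graphs is to record every gateway response at the single point where requests leave your app; a sketch, assuming a StatsD-style metrics client:

```js
// inside the payments package: count response codes and time every call
var metrics = require('./metrics'); // stand-in for your StatsD/Prometheus client

function recordGatewayResponse(statusCode, elapsedMs) {
  metrics.increment('payments.gateway.response.' + statusCode); // e.g. ...response.200
  metrics.timing('payments.gateway.latency', elapsedMs);
}
```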
Automated QA: To avoid putting undue stress on your computation and database resources, it is common to break down calculations for payments needs such as reporting and analytics into background tasks. When calculations are done in partial chunks, automated jobs that verify those calculations can save your support staff and developers a lot of work when something goes wrong midway through a job.
Negative/failure testing: Special card numbers provided by payments providers let you recreate payment declines, such as a processor denial for insufficient funds in the account. You may also be able to test for rejections due to fraud and compliance. This narrows the range of unknown errors your site may run into, especially when expanding to new markets or accepting new payment methods.
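A sketch of a decline test in the same style as the sandbox test above (the nonce is hypothetical; use the decline triggers from your provider's documentation):

```js
// test/decline-test.js - recreate a processor decline with a provider test trigger
var assert = require('assert');
var payments = require('../payments');

it('surfaces an insufficient-funds decline cleanly', function (done) {
  var order = {
    id: 'ci-decline-1',
    totalCents: 200100,
    currency: 'USD',
    paymentNonce: 'sandbox-insufficient-funds-nonce' // placeholder decline trigger
  };
  payments.charge(order, function (err, result) {
    assert.ok(err); // the charge should be declined, not crash
    done();
  });
});
```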
Live testing: Live testing against payment providers is often tricky, and can lead to accounts getting shut down if there is undue load on the API. Despite that, some testing in live is absolutely necessary before you can be confident that on release day, your integration is working as expected.
Test for absence of sensitive information: Storing user information such as credit card numbers or passwords is a very common PCI compliance violation. Regex patterns can be used to make sure that neither your logs nor your database is storing sensitive information.
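A minimal sketch of such a check over log lines (deliberately broad; it will flag anything that merely looks like a card number):

```js
// flag anything resembling a 13-16 digit card number, allowing spaces or dashes
var PAN_PATTERN = /\b(?:\d[ -]?){13,16}\b/;

function looksLikeCardNumber(line) {
  return PAN_PATTERN.test(line);
}

console.log(looksLikeCardNumber('charged 4111 1111 1111 1111'));   // true - alert!
console.log(looksLikeCardNumber('order 42 charged successfully')); // false
```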
I intend to write more posts in this series, covering topics such as considerations before and after going live with payments, when scaling up and so on. If you liked this post, please share or comment.
If you have feedback on this blog post or on integrating payments, please feel free to reach out!
With the rise of VC-funded startups, there was no big community for individuals and small teams launching and supporting digital product businesses funded by their own profits. Rob Walling and Mike Taber noticed that need and created MicroConf, a conference for self-funded software startups.
I was at MicroConf Starter last week. In its 12th year, MicroConf split into Starter and Growth tiers, the Starter edition being for people who do not yet make a full-time income from digital products. If you are the kind of person who enjoys taking an idea to a functional product that solves real-world problems, MicroConf is a conference where everyone shares that goal.
Some at MicroConf had launched products and were doing quite well from a digital product business, be it online courses, software plugins or SaaS. There were also a number of people who wanted to learn more about finding the right idea, product-market fit, sales and marketing.
I really enjoyed the pragmatic voice of the conference and its focus on balance. The conference does not shy away from the fact that bootstrapping software products is not an easy task.
MicroConf publishes great notes for the whole conference. Instead of trying to recap everything, here are some of my takeaways from MicroConf Starter 2017.
Consistency: Rob Walling emphasized at the start of the conference that the success of MicroConf lies in what these two days can do for the remaining 363 days of the year. Consistency often made the difference in the eventual endurance and success of a product. Josh Doddy's blog was fairly dormant for its first 12-14 months but grew exponentially as it approached its current age of 18 months. Mastermind groups were mentioned as a great way for a group of people to help each other stay on track.
Finding an idea worth building: Multiple speakers mentioned the need to take a hard look at your existing skills and assets. What questions do people keep asking you? What are you passionate about that other people find boring? What would you from 6 months ago find valuable? All these were from Ben Orenstein's talk, one of my favorites at the conference. Patrick McKenzie touched on the same topic: double down on what you already do very well and what the market already buys from you. Justin Jackson mentioned the need to find the groups you are best equipped to serve, and to research that audience to find ideas rather than dreaming up problems in your own head. Mike Taber also emphasized asking whether you are in fact the right person to be building that product.
When to launch: An MVP should solve a well-defined problem, not a portion of it nor every possible variation of it. The lack of polish, however, is intentional, to see how much inconvenience customers will endure to solve their problem. Justin Jackson made a great point that an MVP should be the smallest product you could build to disprove a hypothesis. It was interesting to see multiple speakers mention the importance of putting your face right by your product, to encourage trust and take responsibility for what you are delivering.
On user acquisition: Probably the biggest concern for fledgling products, user acquisition/outreach had dedicated talks. Some of the key points were to focus on conversations with users and to double down on a few approaches, e.g. SEO, content or ads, rather than throwing in the kitchen sink. Looking for integrations with other products by forming partnerships was a common theme. Key questions to ask users were how they are solving the problem today and what they have tried, or to ask for introductions to someone with that problem. In the beginning, unscalable strategies such as concierge onboarding are useful, especially for SaaS products.
Getting results: Users are the best signal here; if you have to chase people down to use your product, it may not be solving a real problem. Google Analytics charts showing growth and conversion rates were part of almost every presentation. At the same time, reading those charts can be tricky: many showed a recovery from a flatline or decline to eventual success because the founders believed enough to carry on. It is hard to treat convictions as infallible in the face of data, however, and sometimes it is time to give up.
The Internet started as a publishing medium. It excels at enabling people to share their unique gifts. An amazing amount of content gets put out on the web every day, far more than anyone could read in a lifetime. Massive amounts of information also mean information overload.
In software, it is common to use web tutorials to supplement one's learning of a particular topic. Following the guidance of someone who has already done it can really fast-track development. Tutorials appear on the web in various forms: some of the common formats are long-form blog posts, videos and series of email newsletters. Some enjoy the personal touch that videos can offer; others enjoy being able to quickly skim a blog post.
When you want to put your hard earned knowledge and valuable time into writing a tutorial, there are questions worth thinking about. Does the tutorial cater to its intended audience? Will it reach them? Can someone quickly skim the content and still get value? Is there a way to measure the effectiveness of online tutorials?
To look for answers, I asked a carefully selected group of 150 people about their tutorial preferences. The focus was on software engineers/designers/entrepreneurs due to my familiarity and experience with the field. What emerged from their responses gives a blueprint for creating great online resources.
Real time feedback: If exercises or examples accompany a tutorial, multiple survey responders emphasized the need to check the user's responses and provide interactive feedback guiding them to the solution. Good examples are the checkpoint multiple-choice questions in Coursera/Udacity courses that you must complete to move forward.
Follow up: Several responders emphasized the need to provide contact information or other means to reach the author after going through a tutorial. A comments section or your email/Twitter handle are great channels for a reader to follow up.
Address a concrete problem: Staying focused on a specific problem gives a tutorial depth. A differentiator can be to make clear to the reader whether the focus is academic and structured, or on solving a particular problem right now.
Working examples: Interactive, working examples close to the content that build on top of each other make following along simpler. Michael Hartl's Ruby on Rails Tutorial came up as an example of a detailed, comprehensive tutorial.
Advanced user tutorials: Several users pointed out the need for tutorials that address advanced content. Making expectations clear about the experience level of the intended audience can be a huge positive. Ray Wenderlich's iOS tutorials are good examples of the detail, research and understanding that can go into content before it is put out on the web.
If I had thought about running a 26.2 mile race at the start of 2015, the overwhelming feeling would have been fear. I had only ever run a 10k before and had just signed up for the Austin 2015 half marathon, my first ever half. The prospect of running twice that distance still seemed far away. Flash forward to October 18th, 2015: I finished my first marathon with a time of 3:49:00. As I look back on this year, I wanted to put together some of my realizations from the whole process.
Running is a privilege:
Living somewhere I can run safely, having trails to run on and being in good health to run are all privileges to be thankful for. Growing up in the developing world, I was hard pressed to find opportunities to be involved in outdoor activities. Having the time and space to exercise was a luxury that had to be earned. I never ran in high school, and by the time I graduated college I could not run longer than a 5k. Having the time to run, and being in the US where running is very much part of the culture, has been a huge contributor to my running progress.
Joining a running group is one of the best decisions you can make as a beginning runner:
Training with people better than you in order to improve is not unique to running. At the start of 2015, I had no plans of running a marathon. In April, I joined the Austin Runners Meetup (ARM). In training long runs with ARM, I was able to build up the endurance for longer distances, which made the progression to a marathon much easier mentally. Training with other runners can definitely help you maintain the habit of running as well as improve your form and performance. Moreover, I found a new community of great people, which has been very rewarding.
Tying running with other activities you enjoy can make running much more consistent:
In Charles Duhigg's The Power of Habit, he talks about the cue-action-reward pattern that most habits follow. Being aware of that pattern can help in building running as a habit. When runs are followed by a delicious breakfast, you have something to look forward to. Trance music and podcasts help me maintain flow while running. Travel is one of my favorite things, and going to a new city for a race is something I eagerly look forward to.
Running is a blissful release from life’s distractions:
"All I do is keep on running in my own cozy, homemade void, my own nostalgic silence. And this is a pretty wonderful thing."
— Haruki Murakami, What I Talk About When I Talk About Running
Distractions are part and parcel of our lives as more form factors emerge competing for our limited attention span. Often, this results in us not being aware of the passage of time. Running has been a great antidote to that problem for me. When running is effortful, you have to concentrate on the activity at hand and your entire focus is on the present moment. Long runs offer the prospect of seeing places and neighborhoods that you would not frequent otherwise. A slight breeze on a scorching summer’s day has never been more enjoyable.
Once running is a habit, a chore becomes a craving:
When you start out running, it can be something you dread on your calendar. There is really no way to get past this without sustained practice, lowering the cognitive load with group associations and reward mechanisms. If you continue though, you realize at one point that you start craving the runs. Don’t get me wrong, it still requires a lot of mental effort to get up at 5:30am for that 20 mile run you need to do for Marathon training. But something about the combination of endorphins, habits built during running and seeing your friends out there on the trail can change running into something that you look forward to.
It is fun to start side projects, but not always easy to stay committed and finish. Coffee has powered more projects than one can count. What also helps you stay focused and motivated is the company of like-minded individuals. Coffee and friends result in more and better products.
What can one do to reach more of those people?
Coworking spaces are hot in this economy; WeWork has recently stepped into the $10 billion club. In Austin, Capital Factory, Chicon Collective, General Assembly and Link are just some of the spaces, ranging from accelerators/incubators to plain rentable offices. However, they are much better suited to startups working out of a coworking space full time. Moreover, one spot can get boring pretty quickly.
What if you are a remote worker or work on side projects and wish there was a community you could cowork/share with?
In Austin, there are a few options for that as well. There is a great group called Cafe Bedouins that meets Tuesdays at 7pm at Houndstooth cafe to work on projects. I had a great time there, but wondered why this should happen only on one particular weekday at one particular time. Weekends are often when side projects get attention anyway. By a stroke of luck, I ran into Adam Coulon at Cafe Bedouins, who is also really enthusiastic about coworking and shared the same problem.
We considered platforms where people could spontaneously decide to meet and cowork on projects. Slack was the clear winner: it is really easy to set up, and it has spread like wildfire so people tend to be familiar with the product. Without further ado, we present Coffeehouse Coworkers, made with rauchg's excellent slackin plugin. It's a Slack channel for people to find others and decide on places to cowork!
At the highest level, we love products and want there to be more finished products out there, be it a blog post, design concept, open source software or your consulting business. We think that better products happen in collaboration with like-minded people. This may not be everyone's cup of tea (or coffee, rather), but our hope is to enable some people to optimize their workflow in a low-commitment way.
If any of this interests you, join us at Coffeehouse Coworkers. Right now, the members are mostly Austin based and in tech since that is our current demographic, but it does not really have to be limited to that. Austin is a great place to start since it has a lot of independent workers and great coffeeshops.
This is the second part of the two series blog post about my talk at PyCon Canada. Here is the first part.
Proposal:
Are you part of a team responsible for delivering cross-platform products? End-to-end automated testing and effective communication are important when your project depends on multiple teams spread across functional domains. At Braintree/PayPal, we worked on a framework to reliably ship developer-facing products. We will go over using BDD with Behave to describe test scenarios that speak to both product and engineering, using Appium for mobile automation, and Elasticsearch and Kibana for storage and visualization of test results. You will see how the breadth of packages available on PyPI and the flexibility and ease of Python helped a team of developers whose core competencies were not in Python collaborate on common ground.
If you are giving a talk, too much content on slides means the audience is reading the slides instead of listening to you.
Design your talk expecting failure, and assume things like wifi will not work. A non-functional escalator is still a staircase.
Show your talk to as many people as time allows. Every time I showed my talk to someone, I would find a new way to make the talk better.
It was amazing to hear from a 10 year old about his experience in coding. The barrier to entry to tech will keep falling in a noticeable way.
Teaching remains the best way to learn, alongside building things.
Coding for predictability is often as important as any other consideration in a software project.
Science, data, web, systems and infrastructure were dominant themes at PyCon.
Getting to Toronto
It was really exciting to have my talk accepted at PyCon, since it was my first time speaking at a conference.
Getting through customs was as painless as it could have been.
Toronto was colder than Austin, big surprise! It reminded me of my time back on the East Coast.
T-mobile data roaming was a breeze to set up, and worked mostly well across different providers.
Toronto has several modes of public transportation from downtown to the airport: buses, streetcars, subway. That makes a city more interesting, although it complicates day-to-day travel. Then again, it does not take a whole lot to put public transportation in Austin to shame.
I asked a lady on the subway for directions. It soon turned into a great conversation with her and her husband about life in Toronto and their experiences in the US. For a big city, Toronto scores major points for friendly people. Canadians have a reputation for being polite and helpful, and I would come to recognize it throughout my trip.
My Airbnb was in Kensington Market, close to UofT where PyCon was taking place. It was a vibrant neighborhood with bars, restaurants and transportation nearby. My room was no hotel room, but a bed was all I needed.
Saturday
The morning started with me feeling the stress of not having all my slides and examples ready. I wanted to take some time to reflect on the great feedback I had gotten from my team, but there was little time left.
Adding to my anxiety, the wifi connection was not working. Thankfully, some organizers helped me out. Once the certificate issues were resolved, it worked well for the remainder of the conference.
Continental breakfast consisted of an assortment of cottage cheese, granola/yogurt, muffins, bread and coffee. No complaints.
Talked to Dusty, a Facebook engineer working on the Facebook infrastructure in Portland. Having lived in Canada, he had a lot to share about his experience there.
The morning keynote explored the history of Python interpreters and went into benchmarks. Benchmark conversations can get subjective, but the speaker did a good job of avoiding that.
Talks on application security, Emmy-nominated CGI(!) and Docker deployment followed. The CGI talk offered a very different viewpoint on software problems: highly computation-intensive work with long life cycles means the tradeoffs are very different from the usual SaaS app or consumer product.
Update: Udemy has generously granted a free coupon for the readers of this blog for their React JS and Flux course. Use the code avidasreactjs and the first 50 readers will get free access to the course!
Delivering realtime feedback to users matters more than ever. Gone are the days when chats and games were the only applications of realtime software. From finance to advertising to education, having a realtime component in your web application will elevate the user experience.
Socket.io
From socket.io's homepage, it is a library that enables real-time bidirectional event-based communication. It has two parts: a client side library that runs in the browser, and a server side library for Node.js. In recent times, it has become the de facto way of building realtime web applications in the Node.js world. Key reasons for this have been the way it abstracts away the overhead of maintaining multiple protocols, while carrying over familiar primitives from Node streams and EventEmitter. Some of its other powerful features include streaming binary data, broadcasting to multiple sockets and managing connected client data from the server.
Architecture
The WebSocket protocol is an IETF standard (RFC 6455) that enables interactive communication between browser and server. It functions as an Upgrade request over HTTP/1.1. However, since not all legacy browsers and devices support WebSockets, its cross-platform reach is limited.
Socket.io itself is a library for building realtime applications. It will try to upgrade to and use the WebSocket protocol if available. Socket.io depends on another library called Engine.io, which exposes a WebSocket-like API but provides fallbacks to other transport mechanisms such as XHR and JSONP polling. This enables application developers to write realtime codebases that are independent of browser, device and transport implementation.
Getting started with Socket.io
This tutorial assumes that you have Node.js, npm and Express on your system.
In a directory, create two files called index.html and app.js. In your app.js file, add the following:
```js
var app = require('express')();
var server = require('http').Server(app);
var swig = require('swig');
var path = require('path');

// view engine setup
app.engine('html', swig.renderFile);
app.set('view engine', 'html');
app.set('views', path.join(__dirname)); // index.html lives next to app.js

// server and routing
server.listen(8080);
app.get('/', function (req, res) {
  res.render('index');
});
```
We set up the view engine and serve a basic index page. If this looks unfamiliar, check out the Express docs. Now add the following in app.js:
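```js
var io = require('socket.io')(server);

io.on('connection', function (socket) {
  // greet each newly connected client with 'server event' and a payload
  socket.emit('server event', { foo: 'bar' });

  // log the payload whenever the client answers with 'client event'
  socket.on('client event', function (data) {
    console.log(data);
  });
});
```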
We create a new instance of socket.io, passing the created Express server as a parameter. As the server listens, whenever a new client connects we emit an event called 'server event' with the payload { foo: 'bar' }. The server also listens for 'client event' and logs the payload once it receives the event.
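Next, add the following to index.html:

```html
<!DOCTYPE html>
<html>
  <body>
    <!-- client side socket.io library, served automatically by the server -->
    <script src="/socket.io/socket.io.js"></script>
    <script>
      var socket = io();
      // on 'server event', log the data and reply with 'client event'
      socket.on('server event', function (data) {
        console.log(data);
        socket.emit('client event', { socket: 'io' });
      });
    </script>
  </body>
</html>
```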
The page includes the client side socket.io library. After instantiating a new connection, the client listens for 'server event'; when that event arrives, it logs the data and emits 'client event' with the payload { socket: 'io' }.
Run node app.js and fire up localhost:8080 in your browser. In the terminal you should see { socket: 'io' } and in the browser console you should see { foo: 'bar' } printed out. Congrats, you just built your first Socket.io app!
Useful Socket.io Concepts
Message sending/receiving
Socket.IO allows you to emit and receive custom events. Besides the built-in connect, message and disconnect, you can emit your own named events with an associated payload. emit and broadcast are the ways to send events, and on registers the event listener.
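A quick sketch of the pattern (the event name is made up):

```js
// server: send a custom event to this client, or to everyone else
socket.emit('order shipped', { orderId: 42 });
socket.broadcast.emit('order shipped', { orderId: 42 });

// either side: 'on' registers the listener for the custom event
socket.on('order shipped', function (data) {
  console.log('order ' + data.orderId + ' shipped');
});
```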
Server vs Client API
There are some functions common to the server and client sides, but it is worth reading the docs to understand what is possible on each. Generally, the server side has many more features and capabilities, such as creating rooms and namespaces, but both sides can send and respond to events.
Rooms and Namespaces
Socket.io provides built-in abstractions to demultiplex connected clients. Namespaces, identified by a path, can be connected to as follows:
```js
var socket = io();        // connects to the default namespace "/"
var admin = io("/admin"); // connects to the namespace specified by the path "/admin"
```
After a client connects with var socket = io('/admin'), we can send messages to the admin namespace only.
```js
// the event will only be sent to clients connected to the admin namespace
admin.emit("admin alert", "website traffic is up!");
```
This enables role-based or other criteria-based routing of socket.io events/messages within your application.
Rooms provide a way to further subdivide clients within individual namespaces. Clients within a namespace can join and leave a room. By default, a client is always connected to a room identified by the socket's id. Hence it is possible to send targeted messages to a connected client via socket.broadcast.to(<SOCKET.ID>).emit('test', 'message'). Rooms make more sense for particular themes, whereas namespaces fit well for user types/responsibilities.
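A sketch of the room primitives (the room name is made up):

```js
io.on('connection', function (socket) {
  socket.join('reports');                                 // add this client to a room
  io.to('reports').emit('test', 'message');               // everyone in the room
  socket.broadcast.to('reports').emit('test', 'message'); // the room, minus the sender
  socket.leave('reports');                                // remove the client again
});
```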
React and Socket.io
Now for the exciting part: integrating React.js and Socket.io into an application. React.js is a JavaScript UI library from Facebook. You can follow the official docs to get started with React. This tutorial will not go into great detail about React.js terminology; refer to the official documentation if any of the React syntax looks confusing.
The basic idea of the app is to have an HTML input and a label. When someone types into the input box, the label updates for everyone else who has a window open, except the person typing.
Client side code
Let’s start by changing your index.html to the following
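The script URLs below assume the React 0.13-era CDN builds with the in-browser JSX transformer; adjust versions and paths to your setup.

```html
<!DOCTYPE html>
<html>
  <body>
    <div id="input"></div>
    <div id="label"></div>
    <script src="/socket.io/socket.io.js"></script>
    <!-- assumed React 0.13-era builds with the in-browser JSX transformer -->
    <script src="https://fb.me/react-0.13.3.js"></script>
    <script src="https://fb.me/JSXTransformer-0.13.3.js"></script>
    <script type="text/jsx">
      var socket = io();

      var Input = React.createClass({
        // send the current value of the input box to the server
        notifyServer: function (event) {
          socket.emit('client event', { value: event.target.value });
        },
        render: function () {
          return <input type="text" onChange={this.notifyServer} />;
        }
      });

      var Label = React.createClass({
        getInitialState: function () {
          return { serverValue: '' }; // the label starts out empty
        },
        _onUpdateLabel: function (data) {
          this.setState({ serverValue: data.value });
        },
        render: function () {
          return <div><h2>{this.state.serverValue}</h2></div>;
        }
      });

      React.render(<Input />, document.getElementById('input'));
      var label = React.render(<Label />, document.getElementById('label'));

      // when the server broadcasts 'update label', update the Label component
      socket.on('update label', function (data) {
        label._onUpdateLabel(data);
      });
    </script>
  </body>
</html>
```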
On the client side, two React components called Input and Label are created and mounted by calling React.render. Input renders an HTML input box that calls the notifyServer method whenever someone types into the input field. The notifyServer method then emits a socket.io event called 'client event' with the value of the input box.
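Server side code

In app.js, the connection handler relays the value to everyone else (a sketch consistent with the description below):

```js
io.on('connection', function (socket) {
  socket.on('client event', function (data) {
    // relay to every connected client except the sender
    socket.broadcast.emit('update label', data);
  });
});
```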
On the server side, when 'client event' is received, the server calls socket.broadcast.emit and passes the data payload along. This means that all connected clients except the socket that generated 'client event' will receive the 'update label' event and its payload, i.e. the message goes to everyone except the person typing.
Back on the client side, the Label component consists of a div with an h2 element whose text is set to the serverValue state of the component. getInitialState sets the initial value to '', so the Label starts out empty. When 'update label' is received, we call _onUpdateLabel on label, which is an instance of Label. It sets the serverValue state of the Label component to data.value. That invokes the render method of the component, which generates the h2 header with the updated serverValue.