Well, mostly. I certainly do at work, and insist that the other guys here do too. At home, well, I’m just having fun, really.
Today I’m using Rails/RSpec.
In the past I’ve done PHPUnit and Symfony/Behat, which was just great, though it was mostly based on stories, so you’d probably call that BDD.
I think the main problem with it is knowing when to stop. For the most part, you probably only need functional tests. One can easily think that EVERYTHING needs testing, which just means every feature takes far longer than it should to create. It can also make you more reluctant to throw away code if you’ve spent ages writing both it and its tests.
I think the problem with the PHP community is that most people don’t even know that tools such as testing frameworks, and methodologies such as TDD, exist. I remember first hearing about it in 2010, when one of my co-workers mentioned TDD, and the fact that we should be doing it, but wasn’t able (at the time) to explain what it was or why it was beneficial.
It wasn’t until I started using Laravel 4, and listening to the things Jeffrey Way was saying, that I understood what TDD was, and why it should be used.
To my shame, I’ve only done a very small amount of TDD, mostly because I’m not afforded the opportunity to do so at work, and I only have limited time to focus on my own stuff. But I am resolved to learn it and use it.
So far I’ve never actually worked anywhere that did testing. Quite often in an agency atmosphere, timescales are too short to allow time for such things. Generally these types of establishments are run by marketeers, who as such have no real idea of these things. They almost never have the foresight to realise the problems that could be averted by testing.
Just like Daniel, I started tinkering with testing while learning Laravel, but I really haven’t done much.
I think this is one of the fundamental flaws in people who don’t understand the benefits of TDD. Anyone who has done true TDD (which, I’ll admit, isn’t me, but I’ve seen it in action) will know that TDD doesn’t take any longer, and in most cases will actually take less time.
If you’re writing a class, which is quicker:
1. planning out what the class needs to do (writing the test), then implementing the class with the help of a sexy red/green dot (which fires automatically on file save); or
2. blindly bashing the class together, then (in the web development world at least; I don’t know how this works in other industries) moving to the browser, hitting the refresh button, waiting for not only that class but all the other classes it’s related to to load and run, and guessing whether it’s doing what it’s meant to.
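To make the first option concrete, here’s a minimal Ruby sketch of that test-first flow (the `Slug` class and the hand-rolled assertion are hypothetical, purely for illustration; in practice you’d use RSpec or similar):

```ruby
# Hypothetical test-first sketch. The assertion at the bottom was written
# first, so running this file failed (red) until the class existed.
def assert_equal(expected, actual)
  raise "expected #{expected.inspect}, got #{actual.inspect}" unless expected == actual
end

# The simplest class that satisfies the expectation (green).
class Slug
  def initialize(title)
    @title = title
  end

  # Lower-case the title and collapse anything non-alphanumeric into hyphens.
  def to_s
    @title.downcase.gsub(/[^a-z0-9]+/, "-").gsub(/\A-+|-+\z/, "")
  end
end

assert_equal "hello-tdd-world", Slug.new("Hello, TDD World!").to_s
```

With a watcher running that on every save, the red/green feedback arrives without ever touching the browser.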
Telling a developer they can’t do TDD is like telling a PHP developer they’re not allowed to use Xdebug because of timescales, or a frontender they’re not allowed to use Chrome developer tools because of timescales. It’s nonsensical.
I do try to do it, but certainly not religiously. For me, it’s all about trying to ensure the test has value. With client work it’s a fine line to walk. I used to think that we should make a big thing of it with clients but honestly they don’t care, they only care that the software we’re producing is valuable. My job is to make sure that it is in a timely manner with a reasonable level of quality. Tests are part of that equation.
The thing most people don’t realise is that it will take longer at first, while you learn how to do it and what to test and what not to test. Then you get faster and better able to produce quality software more quickly. I am still learning this myself, and probably will be for many more years, but I now recognise the times I wished I’d written tests first (or at least while I was writing the code) but didn’t because I thought I didn’t have the time, and then spent more time debugging and fixing issues afterwards.
I try to implement TDD more and more nowadays. I do find it hard, though, particularly when I’m working to a tight deadline. I’m better now, but usually I find that any form of proper testing (whether it’s TDD or standard unit testing) will increase the development time somewhat. It’s that all-important balance between quality and deliverability that is sometimes hard to get right.
I’m a massive advocate of TDD to avoid piling technical debt on to a project. I find project managers in agencies tend to respond better to it when I explain it as a “Technical Credit Card”. You can skimp on TDD now, but it’ll cost more later. For the most part, they’ve agreed (eventually).
I use just the standard testing libraries in Rails, and Kiwi/Nocilla for Objective-C; I’ve found that they work really well.
We implemented the TDD and BDD approach at work about six months ago, and the benefits are enormous. For a start, it makes the developer think carefully about what’s expected and about the failure scenarios. It also gives the developer a sense of ownership over a piece of code/functionality, as any changes down the line will get caught by the test cases in the system.
We apply two layers of safety net: first, JUnit test cases, and second, acceptance tests (BDD) using the ThoughtWorks Twist framework.
I clearly favour this approach, and the gains are there to see. The whole process gets streamlined, and once the developers understand the approach, it’s as simple as a few steps.
Takes a bit more time and effort, but if it helps prevent goofs, why not?
Yes and no. Sometimes we write the test cases before coding, other times after we’ve written the code.
Test::More (as well as many of the other Test::* library packages from CPAN).
It helps to focus on the reason the fix or feature is needed, as it can be easy to get distracted and improve things more than is wanted.
Within the Perl community, testing is a big thing. I run CPAN Testers, which tests almost every piece of code submitted to CPAN on 50-100 different platforms and 30-60 different versions of Perl. As a consequence, this has often carried through to many of the companies (e.g. Booking.com) who are actively employing key figures from the Perl community.
TDD is still being used; it’s perhaps just not talked about as much, as there is already so much information out there in the web world about it.
This might be a beginner question, but I’m wondering if you (anyone who reads this) goes a step beyond TDD. For example, if a test fails, does it simply alert someone of the failure, or does it revert the whole system to pre-failure?
I can imagine the latter is important in mission-critical situations, e.g. temperature control at a nuclear plant, and also useful because it prevents meddling hands from derailing (no pun intended) the system.
EDIT: I just came back to edit this post, realising the mistake, but @LimeBlast beat me to it.
Based on the question you’ve just asked, I’m wondering if you fully understand what TDD is? (I’m no expert, but the language you’re using doesn’t quite seem to fit - although, I could be wrong)
TDD is a development methodology which takes a specific approach to development, generally via a pattern known as Red-Green-Refactor: write a failing test (red), write just enough code to make it pass (green), then clean the code up while keeping the tests passing (refactor).
(I’m not sure if this video does enough to explain it out of the box - but let me know what you think)
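For what it’s worth, here’s a tiny hypothetical Ruby illustration of one trip around that loop (the `Price` class is made up, not from any real project):

```ruby
# Red: the assertion at the bottom was written first and failed,
# because Price didn't exist yet.
class Price
  def initialize(pence)
    @pence = pence
  end

  # Green: the first version that passed used crude string-joining of
  # pounds and pence. Refactor: it was tidied into a single format()
  # call, and the unchanged assertion proved the behaviour stayed the same.
  def to_s
    format("£%.2f", @pence / 100.0)
  end
end

raise "still red!" unless Price.new(1999).to_s == "£19.99"
```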
Anyway, so after you’ve written your tests, and everything is returning green, you’re ready to deploy - but as soon as it has been deployed, TDD’s job is done.
That’s not to say that things can’t go wrong later (and they often do), but it’s not TDD’s job to find them, that instead, I think, would probably fall to something like New Relic (or some other type of server monitoring software).
@nick you’re thinking of continuous integration and continuous deployment. They can be an important part of the TDD process.
As test suites grow, they can start to take a long time to run, which means you don’t always run the whole suite, just the subset that covers what you’re working on. You can then hook up some continuous integration with your source control to make sure all the tests run when, say, you push to a new branch or create a pull request.
We use CircleCI with GitHub. Really simple to set up. That makes sure the tests are always run, with loads of visibility. For most projects, we also have CircleCI run deployment on certain branches when a PR is merged and all tests pass. Like, merges into develop deploy to staging, and master deploys to live.
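As a rough illustration (this is an assumed sketch, not our actual file; the job names, Docker image, deploy script and branch names are all invented), that kind of setup looks something like this in a CircleCI `config.yml`:

```yaml
# Hypothetical .circleci/config.yml sketch: run the suite on every push,
# and deploy to staging only when tests pass on develop.
version: 2.1
jobs:
  test:
    docker:
      - image: cimg/ruby:3.2
    steps:
      - checkout
      - run: bundle install
      - run: bundle exec rspec
  deploy_staging:
    docker:
      - image: cimg/ruby:3.2
    steps:
      - checkout
      - run: ./bin/deploy staging   # assumed deploy script
workflows:
  test_and_deploy:
    jobs:
      - test
      - deploy_staging:
          requires: [test]          # never deploys on red tests
          filters:
            branches:
              only: develop         # develop -> staging
```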
Depending on the break of course, but we tend not to roll back to previous commits. Rather, if you break a branch with a PR, just fix it. It won’t deploy until it’s fixed anyway.
You wouldn’t use continuous deployment in any critical bits of the nuclear industry. More likely it would run against a model for 6 months once you’d mathematically proved there were no bugs. When I was there they didn’t even use standard commercial operating systems; too much to go wrong.
I had a job running the servers for a big team of developers who were writing a QA system just for the parts used in nuclear stations. They know the names of the people who made each bolt. When you hear about a crack in a pipe on the news, they’ve usually found it with X-rays or ultrasound in a routine check. They’re really quite careful.