Last time I wrote a bit about tests, linked you to other writing of mine about tests, and instructed you to add a `requirements.txt` file to your repository.
If you are new to software engineering, you probably have had some sort of headache having to do with git, your text editor, your directory, the command line, or something similar. Just because I'm not explicitly addressing all of these does not mean I don't care. I hope Google is sufficiently helpful. If not, I'm not a hard person to get in touch with.
Today will be a lot more of that sort of thing. Such is life. The good part is that it's also interesting.
It's a strange-sounding task: write bad tests. Why am I suggesting we write bad tests? Because I do in fact write bad tests at this stage of a software project. The order in which I'm doing things here is, generally, at least roughly the order in which I'd do things in any other new software project.
Although it's best to write durable tests and keep them, bad tests are a great way of making sure that your first pieces of code work as they should. You can think of them as putting the key in the ignition and seeing if anything on the dashboard lights up.
Put another way: at this stage, you will, one way or another, write a few lines of code just as a sanity check or primitive smoke test. It's better to preserve that code in a file of tests, where it can be run again and again when you want to make sure that recent changes haven't broken anything, than to throw it away at the end of a REPL session or leave it in an ad hoc script.
We'll call this file of bad tests (which are actually good sanity / smoke tests that would be very ugly if presented as a mature test suite) test_task_crud_operations.py. The "crud" in the file name stands for "create / read / update / delete"; CRUD is generally an acronym indicating basic data-persistence operations.
"CRUD" also carries connotations of primitive, base, and obvious work, and you will sometimes hear it used pejoratively. But clean CRUD apps are lovely bits of craft that create oceans of economic value. They make my heart sing.
So! Here's the plan for today:
1. Install pytest;
2. Put some tests in test_task_crud_operations.py;
3. Do some stuff having to do with imports, completely punting on the actual business of learning about imports;
4. Run the tests with pytest.
Going in order:
Install pytest by running `pip install -r requirements.txt` from the veery/ directory. Lots of help is available if you have trouble. (Thanks, other people on the Internet!)
Here's what we'll put in the test file:
```python
from main import get_all_tasks, remove_task
from random import choice


def test_get_all_tasks_has_tasks():
    assert get_all_tasks()


def test_removing_nonexistent_task_leaves_task_list_without_that_task():
    nonexistent_task = ''.join([choice('abcdefgh012345') for _ in range(12)])
    remove_task(nonexistent_task)
    assert nonexistent_task not in get_all_tasks()
```
Going line by line:
```python
from main import get_all_tasks, remove_task
```
We can include functions from `main.py` here. Managing various kinds of complexity is the intellectual core of software engineering. One aspect of that is keeping code in different places and making it available where needed. Python imports are tricky; I'll write about them, just not here. For now, just know that (under appropriate conditions) doing this will let you use functions from `main.py` in files that are not `main.py` (e.g., this test file).
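To make the import concrete, here is a minimal sketch of what `main.py` might export. The module-level list shown here is an assumption for illustration; the actual `main.py` from earlier parts of this series may store tasks differently.

```python
# Hypothetical, minimal main.py (illustrative only).
tasks = ['feed the cat', 'water the plants']


def get_all_tasks():
    # Return the current list of tasks.
    return tasks


def remove_task(name):
    # Remove the task if present; quietly do nothing if it isn't.
    if name in tasks:
        tasks.remove(name)
```

With something like this in place, the test file's `from main import get_all_tasks, remove_task` has real functions to grab.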
```python
from random import choice
```
Python has a huge and excellent "standard library" that also allows you to import a bunch of functions you didn't write. One of them, `choice`, makes random choices for you. We'll be using it in a test.
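Concretely, `choice` picks one element uniformly at random from a non-empty sequence:

```python
from random import choice

letters = 'abcdefgh012345'
picked = choice(letters)   # one random character from the string
assert picked in letters   # whatever it picked came from `letters`
```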
One great thing about pytest is that the test functions are ordinary Python functions you know and love, with names beginning with `test_`, that pytest knows how to do special things with.
When we run `pytest` (in a place where it can "see" this file), it will find these two functions, run them, and see if any of their assertions fail. (It's a good time to learn what an assertion is.)
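If assertions are new to you, here's the whole idea in miniature: `assert` evaluates an expression, does nothing if it's truthy, and raises `AssertionError` (optionally with a message) if it's falsey.

```python
assert 2 + 2 == 4  # truthy: passes silently, execution continues

try:
    assert [], "an empty list is falsey"
except AssertionError as err:
    message = str(err)

print(message)  # → an empty list is falsey
```

When an assertion inside a `test_` function raises, pytest catches the error and reports that test as failed.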
The practical effects of the first test are (i) to verify that `get_all_tasks()` can be run without causing any errors and (ii) to ensure that there are some tasks in the list returned by `get_all_tasks()`. (That's because the empty list is falsey in Python.)
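A quick demonstration of that truthiness rule, which is all the first test relies on:

```python
assert bool([]) is False               # an empty list is falsey
assert bool(['walk the dog']) is True  # a non-empty list is truthy
# so `assert get_all_tasks()` fails exactly when the task list is empty
```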
And here's one reason why this is A Bad Test: if we ever complete all our tasks, this test will start failing! For now, this isn't so bad that we should write something better. The fundamental problem is that we don't have separate test and "real" task lists; the deeper problem is that we aren't cleanly separating the "object level" and "persistence level" of our tasks.
That is not supposed to make sense yet. I'm just reassuring you that:
1. The problem here is too deep to be resolved by making our test somewhat more clever;
2. It's worth having this test anyway, just as a temporary sanity check;
3. The next things we do, despite sometimes being considered advanced software topics, are things you are in fact equipped to understand.
That is: we will in fact clean this all up, enough to give ourselves sturdy foundations for the indefinite future, but not so much as to constrain or unduly postpone the rest of our work.
```python
def test_removing_nonexistent_task_leaves_task_list_without_that_task():
```
It's OK to have really long names.
First we create a random name that is almost certainly not the name of any task in our list:
```python
nonexistent_task = ''.join([choice('abcdefgh012345') for _ in range(12)])
```
(No need to study this line of code in detail if you don't feel like it; it makes a 12-character string where each character is a random selection from the set abcdefgh012345. If you do want to study this, search for information about the `join` method on strings and the list comprehension syntax.)
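Decomposed into separate steps, using nothing beyond the standard library, that one-liner does this:

```python
from random import choice

alphabet = 'abcdefgh012345'
chars = [choice(alphabet) for _ in range(12)]  # a list of 12 random characters
nonexistent_task = ''.join(chars)              # glued into a single 12-character string
```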
```python
remove_task(nonexistent_task)
```
Now we "remove" that task.
There's always a question about what to do with an impermissible, unexpected, or semantically strange input. Here we make `remove_task` check for any instances of that task (which, contra the connotations of the English word "remove," is not in the list). When it doesn't find any, it merrily ends its work, raising no error or warning. Maybe that's good, maybe bad. In the fullness of time there will be much more to say about that.
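To make the two policies concrete, here are hypothetical tolerant and strict versions (function names and the list-based storage are illustrative, not from this series' actual code):

```python
def remove_task_tolerant(tasks, name):
    # Quietly do nothing if the task isn't there.
    if name in tasks:
        tasks.remove(name)


def remove_task_strict(tasks, name):
    # Let list.remove raise ValueError for a missing task.
    tasks.remove(name)


tasks = ['walk the dog']
remove_task_tolerant(tasks, 'no such task')  # no error, list unchanged
try:
    remove_task_strict(tasks, 'no such task')
except ValueError:
    pass  # the strict version complains loudly
```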
```python
assert nonexistent_task not in get_all_tasks()
```
Now we assert (implicitly) that running `remove_task` didn't crash the program and (explicitly) that it didn't somehow add the task to our task list.
Accidentally adding a task in the course of trying to delete it might seem like a sort of failure so remote as not to be worth testing for. Well, maybe it is in fact that remote. But what if you tell a child to turn the porch light off, but it's already off, and they dutifully go to the switch and flip it, inadvertently turning the light back on? Some software failures are like that.
Add an empty `conftest.py` file in the veery/ directory.
Why do you need to do this? Why does this work? I swear it all makes sense, but for now please do feel free to treat it as a bit of magic. Part 9 is long enough already. (But if you're curious, here you go.)
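If you want just a taste of the magic: when pytest finds a `conftest.py`, it inserts that file's directory into `sys.path` (under its default import mode), which is what lets the test file do `from main import ...`. Simulated by hand (the path below is hypothetical):

```python
import sys

project_dir = '/path/to/veery'   # hypothetical project directory
sys.path.insert(0, project_dir)  # roughly what pytest does for conftest.py's directory
# after this, `import main` would resolve to /path/to/veery/main.py
```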
Next post: Python task manager from scratch, part 10: Task objects