#api testing

6 messages

ashen axle

I don't agree that the approach creates brittle tests; quite the contrary. Sure, you have to craft the tests by hand, but think of it this way: the API establishes a contract between the server and the caller. If you send "a, b, c", you expect to receive "x, y, z". The goal of these tests is to ensure your app handles its end of the deal. How would you validate that your application handles errors with the approach you seem to be leaning towards? You can easily cover input errors, but how do you even simulate a billing error? Or any generic API error, for that matter?
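To make the point concrete, here is a minimal sketch of what I mean (Python stdlib; the `/charge` endpoint and the error payload shape are hypothetical stand-ins for whatever the real API contract documents). A hand-written mock can return a billing error on demand, which a recorder would struggle to ever capture:

```python
import json
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class MockPaymentAPI(BaseHTTPRequestHandler):
    """Hand-written mock of the payment API. Path and payload are
    hypothetical; a real suite would mirror the documented contract."""

    def do_POST(self):
        if self.path == "/charge":
            # Simulate a billing error, which is hard to reproduce
            # against a real backend and harder to record.
            body = json.dumps({"error": "billing_error",
                               "message": "card declined"}).encode()
            self.send_response(402)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

def simulate_billing_error():
    # Bind to an ephemeral port and serve the mock in the background.
    server = HTTPServer(("127.0.0.1", 0), MockPaymentAPI)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_port}/charge"
    try:
        urllib.request.urlopen(url, data=b"{}")
    except urllib.error.HTTPError as err:
        payload = json.loads(err.read())
        return err.code, payload["error"]  # (402, "billing_error")
    finally:
        server.shutdown()
```

Your test then asserts that the client code surfaces that 402 the way the contract requires — something you control entirely, with no dev environment gymnastics.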

cloud cloak

by brittle, i mean that it puts the client author on the hook for knowing what the response looks like, so they're liable to make errors there, and it's non-trivial to update the mocks when the API changes. if you're recording, you just blow away the recordings and re-run

the mock problem you're describing exists in either case. ideally you can set up the dev environment to accurately record a situation with a billing error

anyway, even using your approach, it seems like you're required to be able to pass the base_url into the client library, so you can point it at localhost:{port_of_mock_server}. Some client libraries, such as aws_sdk_rust, do not seem to accept that parameter as an input option.
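for reference, the pattern i mean looks roughly like this (Python sketch; `InvoiceClient` and the endpoint are made-up names), where base_url is injectable so a test can swap the real host for the mock server's port:

```python
import json
import urllib.request

class InvoiceClient:
    """Hypothetical API client. base_url is a constructor parameter,
    so tests can substitute localhost:{port_of_mock_server}."""

    def __init__(self, base_url="https://api.example.com"):
        self.base_url = base_url.rstrip("/")

    def get_invoice(self, invoice_id):
        url = f"{self.base_url}/invoices/{invoice_id}"
        with urllib.request.urlopen(url) as resp:
            return json.loads(resp.read())

# production: InvoiceClient()
# tests:      InvoiceClient(base_url=f"http://127.0.0.1:{mock_port}")
```

without that hook, neither hand-written mocks nor record/replay can intercept the traffic, which is my gripe with SDKs that hardcode the endpoint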

ashen axle

I think it's rather important that the client (i.e. the developer) understands the contract of the API they are integrating with. Naturally, the developer might make an error here or there while creating the test suite. In my experience, changing the mocks is much easier than creating new recordings. And I still maintain that simulating API errors with a recorder is borderline impossible. Besides, API contracts don't change that often, and they seldom change in a backward-incompatible way, at least not without a warning ahead of time.

I am not saying that using recordings for testing is a bad idea as a whole. Selenium (and similar tools) can be instrumental when changing a legacy system with poor or no test coverage. However, any approach that does not make it easy for me to test error responses is a deal-breaker, and I firmly believe that's the case with recordings.

I am not familiar with aws_sdk_rust, but the SDK situation you note is not uncommon, and I believe that's by design. Why use an SDK in the first place if you cannot trust that it works? You should be placing any code that uses the SDK (or makes API calls) behind an internal interface.
Then, in your integration tests, you can mock that interface to ensure you're passing correct data. I've yet to stumble on an SDK that makes testing your business logic hard.
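As an illustration of that layering (a Python sketch; `BlobStore`, `archive_report`, and the in-memory fake are all hypothetical names, not anything from a real SDK), the business logic depends only on a thin internal interface:

```python
from typing import Protocol

class BlobStore(Protocol):
    """Internal interface: the only thing business logic sees.
    The production implementation would delegate to the SDK client."""
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

def archive_report(store: BlobStore, report_id: str, body: bytes) -> str:
    """Business logic under test: knows nothing about the SDK."""
    key = f"reports/{report_id}"
    store.put(key, body)
    return key

class InMemoryBlobStore:
    """Trivial test double standing in for the SDK-backed
    implementation; no network, no SDK internals."""
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]
```

In production you wrap the real SDK client behind the same `put`/`get` methods; in tests you hand `archive_report` the fake and assert on the data it passes across the interface, without ever touching the SDK's endpoint configuration.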

In any case, take this for what it is: an opinion. Should you decide to use recordings, I would be most interested to hear about your overall experience.