I've been using Pipenv for the last few months, and my biggest issue is that "--keep-outdated" has been broken in the latest release (2018.11.26) for a while. I've had to install Pipenv from the master branch to get a working version. However, the last time I used "--keep-outdated" from master, it didn't automatically update the hash of the dependency being updated.
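For context, this is the sort of invocation I mean; the flag is supposed to update just the named package while leaving everything else pinned (requests==2.21.0 is only an example package/version):

```bash
# Update one dependency without re-resolving everything else in the lockfile.
# (requests==2.21.0 is just an example package/version.)
pipenv install --keep-outdated requests==2.21.0
```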
Updating specific requirements is something I need to do pretty often, and it's not fun to explain all the Pipenv quirks to the team.
Pip-tools looks like it does everything I need and has fewer quirks, so I ended up making the switch.
Pipenv uses pip-tools under the hood, so the migration was very smooth. The process was:
1. Copy the dev-packages and packages sections of the Pipfile into their own requirements.in files.
2. Run pip-compile on each file.
3. Copy the specific versions and hashes from Pipfile.lock into the generated requirements.txt.
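To make those steps concrete, here's roughly what the result looked like; the package names are made up, and --generate-hashes is only needed because I wanted to keep the hashes from step 3:

```
# requirements.in (from the [packages] section of the Pipfile)
requests
django>=2.1

# dev-requirements.in (from the [dev-packages] section)
pytest
```

```bash
# Compile each .in file into a fully pinned, hashed requirements file.
pip-compile --generate-hashes requirements.in
pip-compile --generate-hashes dev-requirements.in
```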
I did have a small issue where updating a specific package with pip-tools unexpectedly removed a bunch of dependencies from requirements.txt, but running pip-compile with "--rebuild" fixed it.
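For reference, the two pip-compile invocations involved look like this (django is just an example package):

```bash
# Bump a single package while leaving all other pins alone...
pip-compile --upgrade-package django requirements.in

# ...and if the output unexpectedly drops dependencies, clear
# pip-tools' cache and recompile from scratch.
pip-compile --rebuild requirements.in
```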
I added "autoplay muted" to my <video> tag to make a video autoplay in a carousel. It worked on desktop Chrome or Firefox, but didn't work on iOS Chrome or Safari.
Back to Route53 - create an A record for both www.heckingoodboys.com and heckingoodboys.com, using an alias to their respective buckets (the alias will be the first option in the autocomplete).
Why not just use a CNAME from www.heckingoodboys.com to heckingoodboys.com? AWS doesn't charge for queries to alias records that point at AWS resources, but it does charge for queries to CNAME records. So, I used an alias to the bucket instead.
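Here's a hedged sketch of one of those alias records via the AWS CLI; the hosted zone ID Z111111111111 is a placeholder for your own zone, while Z3AQBSTGFYJSTF is the fixed zone ID AWS publishes for S3 website endpoints in us-east-1 (adjust both for your setup, and repeat for the www bucket):

```bash
aws route53 change-resource-record-sets \
  --hosted-zone-id Z111111111111 \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "heckingoodboys.com",
        "Type": "A",
        "AliasTarget": {
          "DNSName": "s3-website-us-east-1.amazonaws.com",
          "HostedZoneId": "Z3AQBSTGFYJSTF",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```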
Here are a few things I've learned while working on a project that uses Sphinx search:
It's important to know the difference between fields and attributes. Fields are full-text indexed and searchable; attributes are basically unindexed columns, so you should avoid queries that filter only on attributes.
It supports its own custom binary protocol and the MySQL protocol (recently they also added an HTTP API). When you see "listen = localhost:9306:mysql41" in the config, it means searchd is listening for MySQL protocol traffic on port 9306.
https://github.com/a1tus/sphinxapi-py3 appears to be the best Python client for the binary API at the moment. It doesn't support INSERTing things into the index (you'll need to use the MySQL protocol for that).
It does not match partial words by default. Turning on partial matching can increase the size of your index dramatically, but you can limit which fields support partial matching with the "infix_fields" and "prefix_fields" settings.
Stemming isn't turned on by default, so searching for "dog" will not match "dogs".
Most special characters ($, @, &, etc.) are ignored by default. You'll need to add them to charset_table if you want them to be searchable.
You will need to use a real-time index if you want to INSERT/DELETE records immediately.
If you're using a real-time index, you will probably need to increase "rt_mem_limit" from its default of 128MB. If the limit is too low, you'll see a high number of "disk chunks" when you run the "SHOW INDEX rtindex STATUS" query. More info: http://sphinxsearch.com/blog/2014/02/12/rt_performance_basics/
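To tie several of those together, here's a hedged sphinx.conf sketch; all of the index/source names, credentials, and paths are made up:

```
searchd
{
    # MySQL wire protocol (the mysql41 part) on port 9306.
    listen = localhost:9306:mysql41
}

# A plain disk index: fields come from the SELECT, attributes are declared.
source articles_src
{
    type          = mysql
    sql_host      = localhost
    sql_user      = sphinx
    sql_pass      = secret
    sql_db        = app
    # Unlisted columns (title, body) become full-text fields;
    # sql_attr_* columns become attributes (filterable, not searchable).
    sql_query     = SELECT id, title, body, category_id FROM articles
    sql_attr_uint = category_id
}

index articles
{
    source        = articles_src
    path          = /var/lib/sphinx/articles

    # Partial matching, restricted to title to keep the index size down.
    min_infix_len = 3
    infix_fields  = title

    # Stemming so "dog" matches "dogs".
    morphology    = stem_en

    # Make $ and @ searchable instead of treating them as separators.
    charset_table = 0..9, A..Z->a..z, a..z, U+0024, U+0040
}

# A real-time index for immediate INSERT/DELETE over the MySQL protocol.
index rtindex
{
    type         = rt
    path         = /var/lib/sphinx/rtindex
    rt_field     = title
    rt_field     = body
    rt_attr_uint = category_id
    # Raise from the 128MB default to avoid accumulating disk chunks.
    rt_mem_limit = 512M
}
```

And since the MySQL protocol is the only way to write to the real-time index, inserting and searching is just SQL over port 9306 (values made up):

```sql
-- mysql -h 127.0.0.1 -P 9306
INSERT INTO rtindex (id, title, body, category_id)
VALUES (1, 'good dogs', 'a post about dogs', 7);

SELECT id FROM rtindex WHERE MATCH('dog') AND category_id = 7;
```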
I had a unique constraint on a VARCHAR column and I inserted two rows with the following values:
"name" (without trailing whitespace)
"name " (with trailing whitespace)
To my surprise, I got a duplicate key error on that second insert. It turns out that MySQL ignores trailing whitespace when it compares these values.
The MySQL docs say this: "All MySQL collations are of type PAD SPACE. This means that all CHAR, VARCHAR, and TEXT values are compared without regard to any trailing spaces. “Comparison” in this context does not include the LIKE pattern-matching operator, for which trailing spaces are significant." (https://dev.mysql.com/doc/refman/5.7/en/char.html)
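This is easy to reproduce (the table and column are made up):

```sql
CREATE TABLE users (name VARCHAR(50), UNIQUE KEY uniq_name (name));

INSERT INTO users VALUES ('name');   -- OK
INSERT INTO users VALUES ('name ');  -- ERROR 1062: Duplicate entry 'name' for key 'uniq_name'

-- PAD SPACE in action: equality ignores the trailing space, LIKE does not.
SELECT 'name' = 'name ', 'name' LIKE 'name ';  -- returns 1, 0
```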
The solution? You should probably be trimming trailing whitespace in your API endpoints and on your front-end.
If you use gevent with requests.get on an HTTPS URL with the default verify=True, you'll see almost 2x longer execution times than with verify=False.
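A minimal sketch of how to see this for yourself, assuming gevent's monkey-patching runs before requests is imported (the URL and request count are arbitrary):

```python
import gevent.monkey
gevent.monkey.patch_all()  # must happen before requests is imported

import time
import requests

URL = "https://example.com/"  # placeholder endpoint

def timed_get(verify):
    """Time 10 sequential GETs with the given TLS verification setting."""
    start = time.time()
    for _ in range(10):
        requests.get(URL, verify=verify)
    return time.time() - start

print("verify=True:  %.2fs" % timed_get(verify=True))
print("verify=False: %.2fs" % timed_get(verify=False))  # warns about insecure requests
```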