I discussed this option in the IRC channel because I don't think it's
very clear. When I first read the list of options, I was under the
impression that Pelican could rewrite URLs. I discovered that this
isn't the case, so I raised it in the IRC channel and was encouraged
to submit a pull request.
The reST example in the "Linking to internal content" section had incorrect
metadata. Moreover, the time was missing from the date in both the Markdown
and reST examples.
This was pointed out by [lefromage at #pelican](https://botbot.me/freenode/pelican/msg/2235275/)
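For reference, a minimal sketch of the corrected metadata (title and slug are hypothetical); the Markdown example carries the time the same way, e.g. `Date: 2013-02-24 10:20`:

```rst
My Article Title
################

:date: 2013-02-24 10:20
:slug: my-article-title
```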
This makes it easier for StaticGenerator to walk FILES_TO_COPY, where
the input may be a directory or a bare filename.
Non-traversable file types (i.e. everything but directories and
symlinks to directories) are not checked against the exclude list.
The user-level effect of this is that explicit entries in STATIC_PATHS
or FILES_TO_COPY will override a hypothetical STATIC_EXCLUDES setting,
which seems like a reasonable approach.
I also removed the Python 2.5 compatibility check for `followlinks` in
`os.walk`, since Pelican now requires Python >= 2.7.
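A rough sketch of the traversal behaviour described above, using hypothetical names rather than the actual StaticGenerator code:

```python
import os

def iter_static_sources(path, excludes=()):
    """Yield files under *path*, which may be a directory or a bare filename."""
    if os.path.isdir(path):
        for root, dirs, files in os.walk(path, followlinks=True):
            # only traversable entries (directories and symlinks to
            # directories) are filtered against the exclude list
            dirs[:] = [d for d in dirs if d not in excludes]
            for name in files:
                yield os.path.join(root, name)
    else:
        # an explicitly listed bare filename is yielded regardless of excludes
        yield path
```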
I noted that if you set the `ARTICLE_URL` setting to have some depth and just create relative links, they will not link correctly.
Prefacing the link with `{{ SITEURL }}/` ensures proper links are created, and it works with `make devserver` as well.
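Assuming this refers to links written in theme templates, the difference would look roughly like this (not the exact markup from the docs):

```html
{# relative link: breaks once ARTICLE_URL adds depth such as blog/2013/slug.html #}
<a href="archives.html">Archives</a>

{# prefixed with SITEURL: resolves correctly, including under `make devserver` #}
<a href="{{ SITEURL }}/archives.html">Archives</a>
```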
I hacked together some code from other modules and some quick googling; this is my first time with Python, so it may need to be cleaned up.
The functionality is to have a config variable in `pelicanconf.py` and to remove duplicates with `sorted(set())`.
For sites where no SOCIAL links are defined, the FEED_ALL_ATOM and/or FEED_ALL_RSS links are not displayed.
Also updates the functional test output.
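I'm not reproducing the exact template diff here, but the intent seems to be along these lines: show the feed links whenever the feed settings themselves are defined, rather than only when SOCIAL is non-empty (a sketch, not the actual change):

```html
{% if FEED_ALL_ATOM %}
    <a href="{{ FEED_DOMAIN }}/{{ FEED_ALL_ATOM }}">Atom feed</a>
{% endif %}
{% if FEED_ALL_RSS %}
    <a href="{{ FEED_DOMAIN }}/{{ FEED_ALL_RSS }}">RSS feed</a>
{% endif %}
```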
These additions are meant to make it easier to disable Pygments or any other
extension the user may not want. In the previous version these plugins are
hardcoded, but by making them a variable in the config it is possible to skip
Pygments or easily load extra Markdown plugins if needed; you can have multiple
plugins in one virtual environment and have different configs load them as
needed.
In my `pelicanconf.py` I then have the following:
```python
MD_EXTENSIONS = ['extra', 'syntaxhighlighter']
```
where `syntaxhighlighter` is a custom Markdown extension I am working on that
uses SyntaxHighlighter instead of Pygments for code highlighting.
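As a sketch of how such a configurable list ends up being consumed (not the exact Pelican reader code), the setting is essentially passed straight to Python-Markdown:

```python
import markdown

# read from the user's config instead of a hardcoded list
MD_EXTENSIONS = ['extra', 'codehilite']

md = markdown.Markdown(extensions=MD_EXTENSIONS)
html = md.convert("Some *Markdown* text with `code`.")
```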
- If the output directory does not exist, the `make clean` command fails,
  which also means that the `make html` command, which would otherwise
  create the output directory, also fails without generating any output
  (one possible guard is sketched below).
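One way to make the `clean` target tolerate a missing directory (a sketch against the `OUTPUTDIR` variable from the generated Makefile, not necessarily the exact change):

```make
clean:
	[ ! -d $(OUTPUTDIR) ] || rm -rf $(OUTPUTDIR)
```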
Quick fix for this traceback:
```
$ pelican-import --wpfile ~/Downloads/mysite.wordpress.2013-02-24.xml
Traceback (most recent call last):
  File "/Users/me/.virtualenvs/pelican/bin/pelican-import", line 8, in <module>
    load_entry_point('pelican==3.2', 'console_scripts', 'pelican-import')()
  File "/Users/me/.virtualenvs/pelican/src/pelican/pelican/tools/pelican_import.py", line 363, in main
    disable_slugs=args.disable_slugs or False)
  File "/Users/me/.virtualenvs/pelican/src/pelican/pelican/tools/pelican_import.py", line 238, in fields2pelican
    for title, content, filename, date, author, categories, tags, in_markup in fields:
  File "/Users/me/.virtualenvs/pelican/src/pelican/pelican/tools/pelican_import.py", line 37, in wp2fields
    if item.fetch('wp:status')[0].contents[0] == "publish":
TypeError: 'NoneType' object is not callable
```
I'm a BeautifulSoup novice, but these changes allowed me to import two of my wordpress.xml files.
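For context, `fetch` was an older BeautifulSoup spelling of `findAll`; in bs4 an unknown attribute such as `item.fetch` falls through to a child-tag lookup and returns `None`, which is why the call above raises `TypeError`. A minimal illustration of the bs4-style replacement (not the actual importer code):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<channel><item><title>Hello</title></item></channel>",
                     "html.parser")
item = soup.find("item")

# BeautifulSoup 3 allowed item.fetch("title"); with bs4 use find_all()/find()
titles = item.find_all("title")
print(titles[0].contents[0])  # Hello
```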
After waiting for Pelican and the server to come up, if either one has
died, give a more helpful message and clean up. Previously the script did
not check for this, so it told the user that everything was running even
if one or both parts had failed for whatever reason.
This is meant to provide a little more user-friendliness in those cases
where the user has a develop_server.sh in the project directory but forgot
to (re)install Pelican, activate the relevant virtualenv, etc., as well as
in other unforeseen situations where one of the processes does not start.
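A rough sketch of the kind of check described, with hypothetical commands and messages rather than the script's actual ones:

```bash
#!/bin/bash
# start both processes in the background, then verify they are still alive
# before telling the user that everything is running
pelican --autoreload -s pelicanconf.py &
PELICAN_PID=$!
python -m pelican.server &
SRV_PID=$!

sleep 1
if ! kill -0 "$PELICAN_PID" 2>/dev/null || ! kill -0 "$SRV_PID" 2>/dev/null; then
    echo "Could not start Pelican and/or the HTTP server; cleaning up." >&2
    kill "$PELICAN_PID" "$SRV_PID" 2>/dev/null
    exit 1
fi
echo "Pelican and the HTTP server are running."
```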
I wrote a plugin that I'd like to also be able to run on pages in
addition to articles. Adding this signal will let me update the content
when a page is finished being generated.
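For illustration, a plugin would hook this via the usual Pelican signal pattern; here I'm assuming a page-level counterpart to the article generator signals (e.g. `page_generator_finalized`), so the exact name may differ from what this change adds:

```python
from pelican import signals

def process_pages(page_generator):
    # hypothetical handler: post-process each page once the generator is done
    for page in page_generator.pages:
        page._content = page._content.replace('<!-- draft -->', '')

def register():
    signals.page_generator_finalized.connect(process_pages)
```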