I frequently encounter text files (such as subtitle files in my native language, Persian) with character encoding problems. These files are created on Windows and saved with an unsuitable encoding (seems to be ANSI), which looks gibberish and unreadable, like this:

In Windows, one can fix this easily using Notepad++ to convert the encoding to UTF-8, like below:

And the correct readable result is like this:

I've searched a lot for a similar solution on GNU/Linux, but unfortunately the suggested solutions (e.g. this question) don't work. Most of all, I've seen people suggest iconv and recode, but I have had no luck with these tools. I've tested many commands, including the following, and all have failed:

$ recode ISO-8859-15.UTF8 file.txt
$ iconv -f ISO8859-15 -t UTF-8 file.txt > out.txt
$ iconv -f WINDOWS-1252 -t UTF-8 file.txt > out.txt

I'm using Ubuntu 14.04 and I'm looking for a simple solution (either GUI or CLI) that works just as Notepad++ does. One important aspect of being "simple" is that the user should not be required to determine the source encoding; rather, the source encoding should be detected automatically by the tool, and only the target encoding should be provided by the user. Nevertheless, I will also be glad to know about a working solution that requires the source encoding to be provided.

If someone needs a test case to examine different solutions, the above example is accessible via this link.
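One possible sketch of the "detect, then convert" workflow described above combines the `file` utility (whose `--mime-encoding` option prints a guessed encoding name) with the same `iconv` invocation the question already uses. The sample text and filenames here are made up for illustration; real-world detection is heuristic and may fail on short files or on encodings such as Windows-1256 (often used for Persian "ANSI" text), where `file` can report `unknown-8bit`:

```shell
# Demo: write a Latin-1/Windows-1252-style sample file, auto-detect its
# encoding, and convert it to UTF-8 -- a sketch, not a guaranteed solution.
printf 'caf\xe9 r\xe9sum\xe9\n' > file.txt        # 0xE9 is "é" in Latin-1

src_enc=$(file -b --mime-encoding file.txt)       # e.g. "iso-8859-1"
iconv -f "$src_enc" -t UTF-8 file.txt > out.txt   # iconv accepts that name
```

If this works for a given file, `out.txt` holds the UTF-8 version; when detection guesses wrong, the output will still be gibberish, which is why a tool with better heuristics (or manual `-f WINDOWS-1256`) may be needed.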