1.  I want to extract a lot of useful photos from a PowerPoint slide deck, and the usual way of selecting each photo and saving it has two distinct disadvantages:

    • time-consuming and repetitive
    • photos are not saved at their original quality

    Modern PowerPoint files with the .pptx extension are actually zip archives. Rename the .pptx file to a .zip file, and look inside the media folder.

    If it's an older .ppt PowerPoint file, first open the file in PowerPoint, then save it as .pptx.

    Using Windows Explorer, browse to the location of the PowerPoint file, and rename the .pptx file to .zip.

    Using Windows Explorer or a file archiver utility such as PeaZip, extract the contents of the newly renamed .zip file.

    You should now see a ppt folder, with a media subfolder containing all the photos from the slides.




  2. Dear Manager,


    I hereby tender my resignation from OrgX, effective 17th May 2020.

    It has been my great pleasure to serve in OrgX for a year. Since day 1, I realized that OrgX is different from other companies. This is a product-focused organization that values engineering, rather than trying to please Wall Street at all costs every quarter. There is great camaraderie and belief that we will continue to lead the industry.

    However, when I look back at my own contribution to OrgX and to the tech industry as a whole, I realize that I have failed to make any positive contributions. Whilst I am a valued team member and perform well according to our KPIs, I have not been able to make any significant contribution to increase our effectiveness and productivity by pushing the boundary of what is possible.

    In short, I have become a cog in the wheel treating symptoms rather than using my ability to think and solve problems faced both by OrgX and others in my industry.

    To carry on means I am settling. That I am in Day Two. I am no longer the highly motivated engineer who wants to create the best products for the world, I no longer care about taking away friction in my colleagues' day-to-day work. I no longer dare to crest the next wave and clamber the next mountain, and would rather cross my arms and look back in satisfaction.

    A Ship in Harbor Is Safe, But that Is Not What Ships Are Built For.

    I sincerely hope that whoever replaces me in this role will have the vision and drive to bring OrgX to greater heights.


    Best regards,
    Lee Hanxue

    I write this resignation letter not because I plan to quit from my current organization in one year's time. Neither am I writing this letter to get some negotiating leverage. I write it because a couple of years ago, I realized that life is finite. Anything broken can be fixed, but time wasted cannot be regained. I have never heard of anyone on their death bed regretting not doubling their net worth. But I have always heard of regrets of not taking the shot, giving up their childhood or lifelong dreams, not being kinder, not spending more time with their loved ones.

    I do not want to work just to pay bills, or to keep up with the Joneses. There are many options in life to pay bills, to feed the family and so on. And I do not want to squander the limited time I have to please my bosses and wait for my monthly paychecks.

    Even if I fail, I would have the satisfaction of putting all my heart and soul to solve problems in my industry, and make the world just that little bit better.

    These do not need to be novel creations but could be as mundane as:
    • shortening the product life-cycle by 30%
    • implementing a continuous delivery pipeline for AI inferencing solutions
    • preventing certain types of failures from occurring
    • creating a higher level of abstraction for engineering teams, resulting in >10% increase in productivity
    • implementing AI inferencing frameworks at the hardware level (e.g. TensorFlow on FPGA)
    • implementing TFX
    • higher AI workflow utilization (e.g. Slurm)

    I had the opportunity to work in one of the best companies in the world. My regret was not about quitting my job, but about not quitting earlier: as good as the company was overall, business changes meant there were no longer opportunities for me to push the boundary of technology to deliver more business value. I do not wish to repeat the same mistake twice.

  3. If you work in a large enterprise or use Chrome in a school, chances are Google Chrome accesses the Internet through a proxy server.


    What is a proxy server? It is simply an intermediary between two machines that want to communicate with each other: Google Chrome on your laptop and the web server hosting websites on the Internet. The more important question is, why use a proxy server? Some of the reasons are:

    • Security: only safe websites are allowed and unsafe or inappropriate websites are blocked
    • Limit access: only devices that are allowed to have Internet access can connect to the proxy server
    • Accountability: know who is accessing what resources
    • Speed: a proxy server used in conjunction with a caching server can speed up access

    Chrome Proxy Settings

    Once upon a time, setting up a proxy server was straightforward. For example, in Internet Explorer, go to Internet Options and insert the proxy address.


    This used to be the case for Chrome too, until it changed to using the operating system's proxy settings.


    So Chrome does not manage its own proxy settings; it uses your operating system's proxy settings.


    Windows Proxy Settings

    Windows 10 has a new proxy settings page.


    But the legacy Internet Options settings are still honored - check that nothing is set there.


    Windows also reads the HTTP_PROXY and HTTPS_PROXY environment variables. If you find that you have no proxy settings enabled but Chrome is still acting weird, check for these environment variables.




    macOS Proxy Settings

    You can set proxy server details in System Preferences


    You can also set the proxy settings from the command line

    
    networksetup -setwebproxy "Wi-Fi" 127.0.0.1 8080
    networksetup -setwebproxystate "Wi-Fi" on
    networksetup -getwebproxy "Wi-Fi"
    
    
    
    

    And for SOCKS proxy

    
    networksetup -setsocksfirewallproxy "Wi-Fi" localhost 1080
    networksetup -setsocksfirewallproxystate "Wi-Fi" on
    networksetup -getsocksfirewallproxystate "Wi-Fi"
    
    
    


    Linux Proxy Settings

    The best way to set a proxy is to use environment variables, so that the settings are applied to all applications, including those started from the command line

    
    http_proxy=http://myproxy.server.com:8080/
    https_proxy=http://myproxy.server.com:8080/
    all_proxy=http://myproxy.server.com:8080/
    no_proxy="localhost,127.0.0.1,localaddress,.localdomain.com"
    
    
    
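    One caveat with the block above, as a quick sketch (the proxy address is the same placeholder): the variables must be exported, or programs launched from your shell will not inherit them.

```shell
# Export so that child processes (browsers, curl, package managers)
# inherit the settings; myproxy.server.com is a placeholder.
export http_proxy=http://myproxy.server.com:8080/
export no_proxy="localhost,127.0.0.1,localaddress,.localdomain.com"

# Any child process now sees the variables:
sh -c 'echo "$http_proxy"'
# → http://myproxy.server.com:8080/
```

    Putting the export lines in ~/.profile (or the bare assignments in /etc/environment, which takes no export keyword) makes them persistent across logins.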

    GNOME Proxy Settings

    If you are using a Linux distribution running GNOME (for example Ubuntu, CentOS, or RHEL), the proxy settings are controlled by gsettings. You will need to use these commands

    
    gsettings set org.gnome.system.proxy mode 'manual'
    gsettings set org.gnome.system.proxy.http host "myproxy.server.com"
    gsettings set org.gnome.system.proxy.http port "3128"
    
    
    


    SOCKS vs HTTP

    HTTP proxies are clear enough: you want to access websites over http/https, and HTTP proxy servers simply relay the requests across. But it also means you need to make changes to firewall rules to open up a new port for the proxy server.

    The SOCKS protocol is designed to work at a lower level, without opening additional ports. For example, ssh natively supports SOCKS for port forwarding, i.e. SSH tunneling.
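    A sketch of that ssh tunneling (user@remotehost is a placeholder; the -G flag merely prints the options ssh would use, which is handy for checking the forwarding without actually connecting):

```shell
# Open a SOCKS proxy on local port 1080, tunneled through remotehost
# (placeholder host; -N means do not run a remote command):
# ssh -N -D 1080 user@remotehost

# Dry run: -G prints the resolved client options without connecting.
ssh -G -D 1080 remotehost | grep -i dynamicforward
```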

    The tricky bit is how to set SOCKS proxy. Instead of

    
    http_proxy=http://myproxy.server.com:8080/
    
    
    

    prepend the socks5:// scheme to the SOCKS server address

    
    http_proxy=socks5://127.0.0.1:1080
    
    
    

    Likewise in Windows 10 proxy settings, add the prefix socks=




    Proxychains

    What if the application you are using does not work with the SOCKS protocol? You can use Proxychains to route your application's traffic through a remote SOCKS/HTTP proxy. For example

    
    $ proxychains4 brew install zsh
    [proxychains] config file found: /home/hanxue/.linuxbrew/etc/proxychains.conf
    
    
    


    More details in this article.


    Proxy Chaining

    Perhaps the remote proxy uses PAC / WPAD to configure a list of URLs. Or the remote proxy server is very simple, and you want fine-grained control over which URLs go through the proxy server and which URLs are accessed directly. pacproxy is the perfect tool.


    Start Chrome with Command Line Options

    You can also explicitly tell Chrome to use proxy settings by starting Chrome from the command line.  For example:

    google-chrome --proxy-server=socks://127.0.0.1:9999

    Note that, for security reasons, Chrome never sends localhost traffic through the proxy, regardless of your settings. Explained in more detail here. If you must proxy local addresses, use the <-loopback> option

    
    google-chrome --proxy-server=socks://10.10.110.28:8080 --proxy-bypass-list=<-loopback>
    
    
    









  4. We want to convert a PDF into PowerPoint slides.

    But there is no direct way to convert between the two formats. So let's break it down into an intermediate step: convert each page in the PDF document into a JPG, then import the JPGs into PowerPoint.










    Convert PDF to JPG using Ghostscript

    Ghostscript is the best tool for converting PDF or PS into other formats. ImageMagick is the more popular tool, but for PDF conversion it uses Ghostscript in the background.


    Before conversion, you need to know a couple of things:
    • The actual density or dpi of the PDF document
    • Quality vs size for the generated image
    This StackOverflow answer, especially the top-voted comment, has a good explanation of density and quality. Here is a straightforward Ghostscript command for the conversion, assuming 200dpi and 100% JPEG quality:


    
    gs -dNOPAUSE -sDEVICE=jpeg -r200 -dJPEGQ=100 -sOutputFile=document-%02d.jpg "The_Artificial_Intelligence_Crush_2018.pdf" -dBATCH
    
    
    


    I wrote about ImageMagick image quality when converting from PDF.

    Convert PDF to JPG using ImageMagick

    Use ImageMagick (with Ghostscript installed) for a more user-friendly syntax, especially for tuning the quality


    
    magick -density 200 -colorspace sRGB "The_Artificial_Intelligence_Crush_2018.pdf" -flatten output-%02d.jpg
    
    
    



    Use convert instead of magick on Linux and macOS.





    Import Images Into Powerpoint

    Now that you have the images ready, let's open a new presentation in PowerPoint, and Insert Photo Album


    Select all the images and click Create



    Tada! Your PDF document is now a Powerpoint presentation.




    How About macOS?

    If you are using macOS, there is good news and bad news. The good news is that exporting images from a PDF is a breeze using Preview. View Thumbnails in Preview, then select all the pages and File → Export. The Preview User Guide has a section titled Extract a PDF page as an image.

    The bad news is that PowerPoint for Mac still does not have the Insert Photo Album functionality. You can either use Automator, or create the slides in Keynote, then export to PPT.





  5. With Vuepress you can start creating content in Markdown out of the box. And Markdown has tons of useful plugins. When I was updating the Wujiquan website, I wanted to embed videos into the content. While it is possible to add HTML tags directly within Markdown, I wanted to keep the content clean and simple.

    markdown-it-html5-embed

    Vuepress supports markdown-it plugins, and https://github.com/cmrd-senya/markdown-it-html5-embed comes to the rescue! Adding


    
    ![6 healing sounds video](https://wujiquan.sgp1.digitaloceanspaces.com/Qigong/Wujiquan-six-healing-sounds.mp4)
    
    
    

    to the page renders the following HTML

    
    <source type="video/mp4" src="https://wujiquan.sgp1.digitaloceanspaces.com/Qigong/Wujiquan-six-healing-sounds.mp4">
    Your browser does not support playing HTML5 video. You can <a href="https://wujiquan.sgp1.digitaloceanspaces.com/Qigong/Wujiquan-six-healing-sounds.mp4" download="">download a copy of the video file</a> instead.
    Here is a description of the content: 6 healing sounds video
    
    
    



    Adding Markdown Plugin to Vuepress

    To add the markdown-it-html5-embed plugin, first add the package to your Vuepress project

    
    yarn add markdown-it-html5-embed
    
    
    

    Then edit your config.js and add the following code

    
    module.exports = {
      markdown: {
        extendMarkdown: md => {
          md.use(require('markdown-it-html5-embed'), {
            html5embed: {
              useImageSyntax: true,
              useLinkSyntax: false
            }
          })
        }
      }
    }
    
    
    

    For more information on the syntax, refer to the markdown-it-html5-embed documentation.



  6. Visual Studio Code's default shell on Windows is PowerShell


    There are better alternatives to PowerShell or cmd.exe in Windows. My favourite is Cmder.

    This assumes you already have Cmder installed in your system, in a path without spaces, for example C:\Tools\Cmder. Chocolatey's Cmder installation works perfectly fine.

    Change Default Shell

    To change Visual Studio Code's default shell, go to Preferences and edit settings.json. Add the following lines, adjusting the path to where Cmder is installed


    "terminal.integrated.shell.windows": "cmd.exe",
    "terminal.integrated.env.windows": {
        "CMDER_ROOT": "C:\\tools\\Cmder"
    },
    "terminal.integrated.shellArgs.windows": [
        "/k C:\\tools\\Cmder\\vendor\\init.bat"
    ],

    Important: you need to set cmd.exe. Setting Cmder.exe will not work

    Now you can have all the goodness of Cmder within vscode, such as built-in git integration.


    PowerShell in Cmder

    You can also change your default shell from the cmd.exe/bash default to PowerShell within Cmder. Use the following settings.json in vscode

    "terminal.integrated.shell.windows": "C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\powershell.exe",
    "terminal.integrated.env.windows": {
        "CMDER_ROOT": "C:\\tools\\Cmder"
    },
    "terminal.integrated.shellArgs.windows": [
        "-ExecutionPolicy", "Bypass",
        "-NoLogo", "-NoProfile", "-NoExit",
        "-Command", ". 'C:\\tools\\cmder\\vendor\\conemu-maximus5\\..\\profile.ps1'"
    ],







  7. It's been a while since I have worked on open source projects. When I logged on to my project's Github page, I was pleasantly surprised to see this warning:


    A bit of Googling showed that Github has started to automatically scan for outdated dependencies with security vulnerabilities. Clicking on the alert will show which packages are impacted.


    Github also helpfully shows exactly what the vulnerabilities are





    Automated Security Fix

    What's even better is the automated security fix feature. You will need to turn it on for your repository though.


    If you choose not to have automated fixes, you still can manually trigger the fix by using the Create automated security fix functionality.


    A pull request will be automatically generated.



    While it is nice for Github to generate security fixes, we should not depend on third parties for our project's security. Use a tool such as yarn audit to keep your project constantly up to date and secure.

  8. How do you check for the number of CPUs and cores? From /proc/cpuinfo?

    
    $ grep -c ^processor /proc/cpuinfo
    56
    $ cat /proc/cpuinfo | awk '/^processor/{print $3}' | wc -l
    56
    
    
    

    They will return twice the number of physical cores if hyperthreading is enabled. How about using nproc?

    
    $ nproc --all
    56
    
    
    

    Still the same. Not accurate if hyperthreading is enabled.

    getconf

    The getconf command works great because it is available on both Linux and macOS.

    
    $ getconf _NPROCESSORS_ONLN
    56
    
    
    

    Unfortunately it, too, only returns the number of logical (virtual) cores.

    lscpu and sysctl

    Another problem is that the output from /proc/cpuinfo is meant to be read by humans, and hence may change across versions - not suitable for parsing in scripts.

    You should be using lscpu instead, or sysctl on macOS. This script from Stackoverflow by the user mklement0 counts the number of logical and physical CPUs

    
    #!/bin/sh
    
    # macOS:           Use `sysctl -n hw.*cpu_max`, which returns the values of
    #                  interest directly.
    #                  CAVEAT: Using the "_max" key suffixes means that the *maximum*
    #                          available number of CPUs is reported, whereas the
    #                          current power-management mode could make *fewer* CPUs
    #                          available; dropping the "_max" suffix would report the
    #                          number of *currently* available ones; see [1] below
    #
    # Linux:           Parse output from `lscpu -p`, where each output line represents
    #                  a distinct (logical) CPU.
    #                  Note: Newer versions of `lscpu` support more flexible output
    #                        formats, but we stick with the parseable legacy format
    #                        generated by `-p` to support older distros, too.
    #                        `-p` reports *online* CPUs only - i.e., on hot-pluggable
    #                        systems, currently disabled (offline) CPUs are NOT
    #                        reported.
    
    # Number of LOGICAL CPUs (includes those reported by hyper-threading cores)
    # Linux: Simply count the number of (non-comment) output lines from `lscpu -p`,
    #        which tells us the number of *logical* CPUs.
    logicalCpuCount=$([ $(uname) = 'Darwin' ] && 
                           sysctl -n hw.logicalcpu_max || 
                           lscpu -p | egrep -v '^#' | wc -l)
    
    # Number of PHYSICAL CPUs (cores).
    # Linux: The 2nd column contains the core ID, with each core ID having 1 or
    #        - in the case of hyperthreading - more logical CPUs.
    #        Counting the *unique* cores across lines tells us the
    #        number of *physical* CPUs (cores).
    physicalCpuCount=$([ $(uname) = 'Darwin' ] && 
                           sysctl -n hw.physicalcpu_max ||
                           lscpu -p | egrep -v '^#' | sort -u -t, -k 2,4 | wc -l)
    
    # Print the values.
    cat <<EOF
    # of logical CPUs:  $logicalCpuCount
    # of physical CPUs: $physicalCpuCount
    EOF
    
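    On Linux alone, the same lscpu -p idea condenses into two one-liners; a sketch, assuming a reasonably recent util-linux (the -p=core,socket column selection is not available in very old versions):

```shell
# Logical CPUs: one non-comment line per CPU in the parseable output.
lscpu -p | grep -v '^#' | wc -l

# Physical cores: count unique (core, socket) pairs.
lscpu -p=core,socket | grep -v '^#' | sort -u | wc -l
```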






  9. Linux does not allow user processes to listen on a port below 1024. This is by design. Specifically, in the file net/ipv4/af_inet.c, these lines perform the check



    
    if (snum && snum < PROT_SOCK && !capable(CAP_NET_BIND_SERVICE))
                  goto out;
    
    
    


    The security reasoning is misguided in 2019. Malware writers and other bad actors could not care less what port your process is running on: as long as they can take control of the process and the process has a lot of privileges, your system will be compromised. But I digress.



    setcap

    The best method to overcome this limitation is to use the setcap command



    
    sudo setcap 'cap_net_bind_service=+ep' /scratch/spiped
    
    
    


    Note that sudo or root permission is required when running setcap, or else you will see this message



    
    $ setcap 'cap_net_bind_service=+ep' /scratch/spiped
    unable to set CAP_SETFCAP effective capability: Operation not permitted
    
    
    

    Due to setcap's limitations, the command only works on a local filesystem. When I ran the command on my NFS-mounted home directory, I saw this error


    
    $ sudo setcap 'cap_net_bind_service=+ep' ~/bin/spiped
    Password: 
    Failed to set capabilities on file `/home/hanxue/bin/spiped' (Operation not supported)
    
    
    


    For more information, refer to the setcap manual.


    iptables

    You can re-route local connections from one port to another port. Note that the following will NOT work for local connections

    
    iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 25 -j REDIRECT --to-port 8025
    
    
    
    You need the OUTPUT chain as well to catch locally generated connections

    
    iptables -A PREROUTING -t nat -p tcp --dport 80 -j REDIRECT --to-port 8080
    iptables -A OUTPUT -t nat -p tcp --dport 80 -j REDIRECT --to-port 8080
    
    
    




    SSH Tunnel

    This sounds a bit silly, but you can open an SSH connection to the same machine, forwarding the port


    
    sudo ssh localhost -L 25:localhost:8025 -N
    
    
    


    authbind

    Alternatively, use authbind. Configure port 25 for access by all users


    
    sudo touch /etc/authbind/byport/25
    sudo chmod 777 /etc/authbind/byport/25
    
    
    


    Then run the "authbind" command


    
    authbind --deep /scratch/spiped --with --your --arguments
    
    
    




    Hack the Linux Kernel

    That's the advantage of working with open source software: when all easy options fail, you can edit the source code.


    Remove the 1024 port restriction by changing PROT_SOCK from 1024 to 0


    
    # /usr/src/linux-[version_number]/include/net/sock.h:
    /* Sockets 0-1023 can't be bound to unless you are superuser */
    #define PROT_SOCK       1024
    
    
    




    Or comment out the check in the aforementioned net/ipv4/af_inet.c file


    
    // if (snum && snum < PROT_SOCK && !capable(CAP_NET_BIND_SERVICE))
    //               goto out;
    
    
    
    


    This really should be the last resort: who knows if there are any sloppy developers who count on binding to ports below 1024 failing for their application to work ;-)
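    Before reaching for the compiler, note that on kernels 4.11 and newer the threshold is exposed as a sysctl, so it can be lowered at runtime without recompiling (the same security trade-offs as the source edit apply, and changing it requires root):

```shell
# Read the current threshold (1024 by default):
sysctl net.ipv4.ip_unprivileged_port_start

# Lower it so unprivileged processes can bind to port 80 and above
# (requires root, hence commented out here):
# sysctl -w net.ipv4.ip_unprivileged_port_start=80
```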

    By the way, I was trying to relay mail delivery from one server to another server using spiped.






  10. Google used to show the days and times when I most frequently use Search. I liked being able to see when I am most active and what my usage trends are over time. Unfortunately, the feature has been taken away.

    History Trends Unlimited

    Thankfully there is the History Trends Unlimited Chrome extension. It will automatically pull your full Chrome history and present it in an informative manner.


    I can see roughly the times when I am at work, and the top sites that I visit.

    Unlimited History

    Chrome only stores the history of the sites you have visited for 90 days; beyond that, the older data is deleted. What's great about this extension is that it keeps your history beyond 90 days - hence the name "Unlimited".

    Export History

    In case you need to export your complete browser history, the extension lets you do that. You can also automatically back up your history at a fixed interval.

    Feel free to explore the export options - fantastic for performing analysis on your web browsing behaviour.

    Link to Chrome Web Store

