
Release: Tdarr Beta v1.102 release [18th Jan 2020]:

https://github.com/HaveAGitGat/Tdarr/releases/tag/v1.102-Beta

Changes:
Beta v1.102 release [18th Jan 2020]: Changes:
-[New] Plugin creator option (Filter by age) - select 'Date created' or 'Date modified'
-[New] Plugin creator option (Filter by age) - include files OLDER than specified time
-[New] Options to sort queue by date (Scanned, Created, Modified)
-[Fix] Audio file codec not showing in search results
-[Fix] MJPEG video incorrectly tagged as audio file
-[Fix] Default plugin priority
-[Fix] 'Too many packets buffered for output stream' when health checking
-[Fix] Folder path placeholder text

Previous changes:
Beta v1.101 release [6th Dec 19]: Changes:
-[New] Force processing of files
-[New] Action: HandBrake basic options
-[New] Action: Add one audio stream
-[New] Action: Keep one audio stream
-[New] Action: Standardise audio stream codecs
-[New] Channel count now shown in streams table
-[Fix] Rare search result bug (no results shown)
-[Fix] Audio files with cover art being detected as video
Alpha v1.008 release [1st Dec 19]: Changes:
-[New] Plugin creator UI and groundwork for future Filters and Actions. Filters now encapsulate the Action taken; no separate Filter needed
-[New] Re-order streams plugin added by default for new libraries
-[New] Backup and restore feature (scheduled midnight backup)
-[New] Toggle copying to output folder if file already meets conditions
-[Improvement] Change to how plugins are imported. Built-in NodeJS modules can now be used when creating plugins (can use e.g. require('fs') etc.)
-[Improvement] Idle CPU usage drastically reduced
-[Improvement] Various stability fixes
-[Improvement] Confirmation needed when restoring from backup
-[Fix] Video resolution boundaries improved
-[Fix] Non-existent files + junk removed when running Find-New scan
-[Fix] Corrected error when creating remux container plugin
-[Fix] If one plugin has an error, the rest will still load
-[Fix] Auto cache cleaner disabled due to issues on some systems
-[Fix] Move item to Transcode:Error instead of Transcode:Not required if error with plugin
Alpha v1.007 release [22nd Nov 19]: Changes:
-[New] Option to enable Linux FFmpeg NVENC binary (3.4.5 for unRAID compatibility)
-[New] Option to ignore source sub-folders
-[New] Skip health check button
-[New] Option to change visible queue length
-[New] Option to duplicate library
-[New] Customise search result columns
-[New] UI improvements (@jono)
-[New] Option to delete source file when using folder to folder conversion
-[New] Community plugins (remove commentary tracks etc.)
-[New] Option to delete local plugins
-[New] Auto clean cache folder + prevent non-Tdarr cache files being deleted in case of incorrect mapping
-[Fix] Reset processing status of all files on startup so no files get stuck in limbo
-[Fix] Transcode pie showing incorrect data
-[Fix] Folder watcher will now wait longer to detect if a new file has finished copying
-[Fix] Folder to folder conversion: files which already meet requirements will be copied to output folder
-[Fix] Folder to folder conversion: cache/output folder bug
-[Fix] Default containers to scan for now include ts/m2ts
-[Fix] Keep all stream types when using remux plugin creator
-[Fix] Prevent too many workers occasionally starting
-[Fix] Newly transcoded files will be bumped correctly to top of queue when sorting by size
-[Fix] Closed caption scanning now much faster & more accurate (even on empty captions)
-[Fix] Plugin creator plugin path error
-[Fix] Health check error when using FFmpeg hardware transcoding
Alpha v1.006 release [9th Nov 19]: Changes:
-[New] NVENC for FFmpeg enabled (Linux + tdarr_aio)
-[New] Per-library stat breakdown
-[New] Plugin creator
-[New] Plugin creator option - Filter by codec
-[New] Plugin creator option - Filter by date
-[New] Plugin creator option - Filter by medium
-[New] Plugin creator option - Filter by size
-[New] Plugin creator option - Filter by resolution
-[New] Plugin creator option - Transcode
-[New] Plugin creator option - Remux container
-[New] Option to detect closed captions (Linux + tdarr_aio + Windows)
-[New] Community plugin - remove closed captions
-[New] Configurable plugin stack (mix local and community plugins, re-order etc.)
-[New] Folder to folder conversion (feedback needed, test first)
-[New] Skip transcoding button
-[New] Options tab - set base URL
-[New] Remove item from library button
-[New] Exclude codec whitelist/blacklist
-[New] Navigation bar UI
-[New] Queue library alternation option
-[Fix] 'Re-queue' buttons on 'Tdarr' tab
-[Fix] Prevent find-new/fresh scans occurring on a library at the same time. Hourly find-new scan re-enabled for libraries with folder watch ON
-[Fix] Library prioritisation sort
-[Fix] Reduced search result number for quicker render + UI changes
Alpha v1.005 release [1st Nov 19]: Changes:
-[New] UI overhaul (dark theme)
-[New] Hardware transcoding using tdarr_aio container + HandBrake
-[New] Improved bump-file system
-[New] Improved plugin/transcode formatting
-[New] File history
-[New] Search tab shows queue position, streams, file history + more
-[New] Sort and filter search results
-[New] Prioritise libraries
-[New] Sort queues by size/date created
-[New] Full file path shown on workers
-[New] Total file count shown when files being scanned/processed
-[New] Search local plugins
-[New] Set base URL with env variable
-[New] Requeue-all buttons added to Tdarr tab
-[Fix] Library requeue buttons requeue only the specified library
-[Fix] Ubuntu container permissions
-[Fix] File scanner logs
-[Fix] Video height boundaries reduced for 720p, 1080p etc.
-[Fix] Bigger font throughout
Alpha v1.004 release [23rd Oct 19]: Changes:
-Scan on start switch added
-Prevent Tdarr temp output files mistakenly being scanned
-Docker memory fix for large libraries (30,000+ files)
-Improved garbage collection
-Temp scanner data written inside container (should fix permission issues with host)
-tdarr_aio (all-in-one) Ubuntu container now available with MongoDB inside the container
Alpha v1.003 release [10th Oct 19]: Changes:
-Workers now show more detailed information: ETA, CLI type, preset, process reasons, start time, duration and original file size
-Help links updated
-Improvements to FFmpeg percentages
-Switch to turn processing of a library on/off
-Can now click on pie-chart segments to see files in those segments
-'Not attempted' items renamed to 'Queued'; 'Transcode:Passed' items renamed to 'Transcode:Not required'
-Status tables have been put into tabs. Each tab shows the number of related items on it (e.g. no. of items in queue). Additional information added to items (codec, resolution etc.)
-Date-time stamp now shown on processed items. Old/new size now shown for transcode items
-HandBrake and FFmpeg terminal implemented on the 'Help' tab. This is mainly so you can see documentation such as which encoder types are enabled, but any HandBrake/FFmpeg commands can be put into the terminal
-Create-a-sample button added to items in search results. Clicking the button will create a 30 second sample of the selected file and output it in the new 'Samples' folder in the Tdarr data folder. Use the sample to test plugins/transcode settings and to help when reporting bugs
-Additional schedule buttons added so you can bulk-change daily hour slots
-Reduced 720p boundaries so now 960*720 video files will show up in the 720p category instead of just 1280*720 files
submitted by HaveAGitGat to Tdarr

I built a 100% open-source hosting platform for JavaScript microservices and webhooks, in JavaScript. Ask me anything! Architectural write-up included.

Hello. I built a 100% open-source hosting platform for JavaScript microservices, in JavaScript. Ask me anything!
The project: http://hook.io
The source code: http://github.com/bigcompany/hook.io
Built with: Node.js, CouchDB, and Github Gist. Node Package Manager modules are fully supported.
Architectural details can be found a bit further down.
Interested, but too busy to read this now?
If you'd like, you can run the following curl command to opt in to our mailing list. We'll periodically send you updates about the project.
curl [email protected]
Replace [email protected] with your email address.
What is the purpose of hook.io?
hook.io is an open-source hosting platform for webhooks and microservices. The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms. hook.io provides an easy way to create, host, and share microservices. Through developing many small re-usable microservices, you can reduce the complexity of your applications while improving stability.
Why or how would I want to use hook.io?
You should want to use hook.io if it can make your life as a developer easier.
The most basic use case for hook.io is quick and free webhook hosting. You can instantly create a simple hook which parses the incoming parameters of an HTTP request and performs arbitrary actions on it. For instance: send an SMS message every time the Hook is requested as a webpage. Since NPM is supported, you can re-use any existing library from the extensive NPM module repository. You can also configure Hooks to be executed on a schedule using a Cron pattern.
It's worth noting that Hooks are fully streaming. Inside your Hook source code you have direct access to Node's http.IncomingMessage and http.ServerResponse request and response streams. This means you can treat the inside of a Hook exactly as if it were a streaming middleware in a regular Node HTTP server. Having direct access to these streams is extremely useful, and I am unsure if any other microservice hosting providers currently offer this feature.
More advanced use cases for hook.io involve replacing individual parts of your application with microservices. Instead of adding a new route or module to your application, you could create a Hook responsible for only one unit of functionality and call it using a regular HTTP request from inside your existing application. One specific example could be building a Hook with a custom theme which acts perfectly as a stand-alone sign-up form. This sign-up form can then be loaded server-side in your application using one HTTP GET request. It might sound complicated at first, but integrating microservices with your existing application is actually very easy. In the upcoming weeks we'll work on releasing specific guides for separating application functionalities into microservices.
An even more advanced usage would be building a suite of Hooks and composing them to create new and unique applications! Since every Hook understands Standard In and Standard Out, and Hooks can easily call other Hooks, there is an endless number of combinations to be made. This composability lays the foundation for Flow-based Programming without imposing any specific rules for composition. A specific example could be building a Hook (called "tar") responsible for taking in STDIN and streaming out a compressed tar file. Once this Hook is created, you could easily pipe the results of another Hook (such as an image downloader) into the "tar" Hook. These Hooks don't exist yet, but I am certain someone will build them in the near future.
Unix Pipes!
hook.io is very friendly with Unix pipes. Using STDOUT and STDIN you can connect hook.io to your existing Unix toolchain. The best way to explain this concept is to review the curl examples.
Here is one specific example of using hook.io to flip a cat upside-down with cat and curl. You will need to provide your own cat.png:
cat cat.png | curl -F 'degrees=180' -F '[email protected];type=image/png' http://hook.io/Marak/image/rotate > upsidedown-cat.png
The Data!
As you may have noticed in the last example, hook.io is fully capable of streaming binary data. It also supports streaming file uploads and multipart form uploads, and will assist in parsing all incoming form fields, JSON, and query string data.
Software Architecture
The core software architecture of hook.io is Resource-View-Presenter (RVP).
Resources are created using the npm resource module.
View-Presenters are created using the npm view module with regular HTML, CSS, and JavaScript. The same View-Presenter pattern is also used to implement custom theming for Hooks; see hook.io/themes.
Important dependencies
mschema - Provides validation throughout the entire stack.
big - Small application framework. Provides website app which hook.io extends.
resource-http - Provides the core HTTP server API. Helps in configuring Express with middleware like Passport.
resource-mesh - Provides a distributed event emitter mesh using a star network topology. hook.io primarily uses this module as a monitoring agent to report status back to our monitoring sink.
resource-user - Provides a basic user API (signups/logins/encrypted passwords/password resets/etc.)
Server Architecture
There is one front-facing HTTP server and any number of Hook Workers.
The front-facing server is responsible for serving static content, maintaining user session data, and piping requests between the client and Worker.
Workers are responsible for executing user-submitted source code and piping their responses through the front-facing server to the client.
Note that communication between the Hook and the client remains streaming throughout the entire architecture. This gives hook.io the ability to perform complex tasks like transcoding large video streams without worrying about clogging up any part of the system with large memory buffers.
Hook Servers and Hook Workers are immutable and stateless to ensure stability of the platform. They are designed to fail fast and restart fast. mon is used as a process supervisor.
This architecture can theoretically scale upwards of 10,000 concurrent connections. Realistically, it will probably be closer to 4,000. When the site needs to scale past this, we will create several front-facing servers and load-balance incoming HTTP requests to them using DNS.
Hook and user configuration data are stored in a CouchDB database. If the database grows too large, we will split it into several smaller database servers sharded by the first alphabetic letter of every document's primary key.
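That split hasn't been built yet; this sketch just illustrates the routing idea, with a made-up function name and an invented three-server shard layout:

```javascript
// Route a document to a database server by the first letter of its key.
// Each shard covers keys starting from its `from` letter up to the next shard.
function shardFor(key, shards) {
  const first = key[0].toLowerCase();
  let chosen = shards[0];
  for (const s of shards) {
    if (s.from <= first) chosen = s; // last shard whose range starts <= the key
  }
  return chosen.name;
}

const shards = [
  { from: 'a', name: 'couch-a-h' },
  { from: 'i', name: 'couch-i-p' },
  { from: 'q', name: 'couch-q-z' },
];
```

One design note: first-letter sharding is simple but can produce uneven shards (many more keys start with "s" than "x"); a hash-based scheme spreads load more evenly at the cost of losing key locality.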
Source code for Hooks is currently stored on Github as Github Gists. I'd imagine sometime in the future we will add the option to store and edit source code directly on hook.io itself. The project is open-source, so you could be the first to open up the issue!
Questions? Comments? Feedback?
Let me know! Open-source projects get better with collaboration. Every comment and piece of feedback counts.
Maybe take five minutes to try the platform out? You might like it!
The dependency tree for hook.io is re-used in many applications; several of these dependencies I maintain myself. If you have feedback or comments about any specific dependency, let me know!
submitted by _Marak_ to javascript

Jack of all trades, Master of none tips?

TL;DR: I had no idea that this would be so long or take 4 hours to type (ADD for you :P). Below is my last 4 years' journey into programming as a profession, with a bunch of unlucky breaks along the way. I've developed a wide range of dev/IT skills (mostly by circumstance rather than choice) but none of them feel strong enough for me to land a regular dev/IT job. I lack a relevant degree, and the roles I do get pay minimum wage for doing a vast amount of things on my own. I'm in my thirties and want to change that; advice?
Adobe AIR/Flash - ActionScript on mobile platforms
Bit of a background. I sunk myself into a bunch of debt studying for something I was interested in (art/design related) but which didn't really have many work opportunities in my country at the time. After graduating (2012) and job hunting for about 8 months, while at my job at the time (4 years in), I got a call from the Learning and Development department about a small Flash game I had made when I started at the company (I had been doing light programming with Flash, mostly as a hobby, since 2004). So they brought me on to do an e-learning project. I had been playing with Starling (GPU accelerated), which made Adobe AIR on mobiles actually viable. Over the span of a year I learnt a bunch and optimized the app pretty well, going as far as to improve the bitmap font renderer to chunk paragraphs of text effectively when scrolling (ok, performance with Flash was still pretty poor in some areas even with the GPU :p). I had tried to push heavy text content to be delivered by PDFs but management was against it. That work paid off, since I modified the parser to include symbols/images or interactive content within the text document (all loaded externally for easy edits by staff without compilation; I built a basic template engine with some pooling). Anyway, this was all under the table, and once it was done to the point of proving itself successful in field tests I was given the boot, ha.
I decided that if I couldn't get hired for what I was qualified in, I'd get out of my job (the actual one, without the Flash dev) and make my hobby into a career. Welp, thanks to Steve Jobs, Flash dev demand had really gone downhill (probably a good thing for me).
Exploring new territory - Software Testing
I got into a short course on software testing. It was poorly taught: I was confused about one thing and told I was wrong, mocked by the tutor and students, only to find out later, through leading industry professionals who were regulars in a Google group, that I was actually the one who was right. That course then had some "test" with little of the information you'd expect to have on the job (couldn't get answers to any basic questions): testing and finding bugs on some poor-quality foreign website (supposedly actual free/paid work for the tutor's clients/friends). It got to the point where I saw no value in it, so I left the course to teach myself online. I passed the ISTQB exam and went to meetup groups, but wasn't really digging the career path (job-hunt wise, there weren't many opportunities based on the piece of paper and basic skills I had; most were after mid/senior level or some other complementary skill). Automated testing did sound interesting at the meetups, though. Overall the course was not worth the 2k it was charging (I didn't pay this though, as it was a government-funded initiative due to a shortage of software testers.... right).
Exploring new territory - C#
I tried another short course. This one offered free entry with basic programming skills, on the basis that you'd make payments towards $5k once you secured a job through the education provider. Sounded great, time to learn C#! Again I ran into quality issues: the tutor, a senior dev, made multiple poor choices that I questioned, and I lost interest/faith once we got to the "group" project at the end; I didn't participate beyond reviewing the students' code and advising on where things could be improved. There was a voluntary project for a real company doing a presentation at a TEDx event; it even had a small 3-figure payment on completion. Three other students also volunteered and I was made lead. I had never touched HTML/JS/CSS before beyond viewing the source of pages when I was young. The project involved tweening tweets on a big display in realtime, doing some colour transitions (radial gradient BG) based on the colour associated with the hashtag paired with the event tag, and a few other features. I had a good idea how to handle that and was confident due to my design studies, so I took the role none of the other devs wanted, assigned two to the backend and another to a simple Bootstrap UI for the iPad (a feature dropped as they were unable to deliver, declining any help I offered). One of the backend devs did a PHP implementation but dropped out for personal reasons halfway through; the other student tried their best and got some assistance via the tutor to reimplement it in C# with SQL (we had to support replays with a timescale, for speakers to see how their talk was received). I assisted where I could but was rather swamped on my end with several unexpected issues that cropped up. I ended up pulling an 80-hour week, including being flown to the city the event was held in to provide on-call support.... which was required, as someone's tweet crashed the database!
At one point I had to quickly make a change to the site with no computer nearby, so I FTP'd to the webserver and edited the files on my slow 4-year-old Android phone. Overall a great success, but I found myself really enjoying web dev over C# and Microsoft technologies. The class all thought I'd be the first to get a job; I was probably one of the last :)
Teaching myself web development
I spent the next year and a half learning as much as I could about web development. Unemployed, I would often wake up and read until it was time to sleep, trying out some coding (though not as much as I probably should have done). Most of the time it was building an understanding of various topics; the analysis paralysis from the abundance of choice was numbing compared to my ActionScript/Flash days. I formed opinions to settle on and learn more about, based on my own interests and on what I believed/understood to be worth the time or in demand by employers (didn't buy into PHP or AngularJS despite their job demand); I got into Node.js and ES6. I wanted to play with Vagrant and Docker, graph databases like Neo4j, NoSQL like MongoDB, different frontend frameworks and tools (as well as the Node.js ones like Express/Koa, Gulp, PhantomJS, webpack, etc.), template engines, Mapbox (similar to Google Maps), Ansible/Salt and more. I figured I'd start with something small, like redoing the backend of the TEDx project with Node.js and Mongo instead.
Job hunting
Often when opportunities presented themselves, however, I'd completely drop what I was doing/learning and try to make the most of them (income or job application tests). Most employers, through HR or agencies, would turn me down due to no related degree; I'd be told I was not passionate or serious about a career as a developer. The growing gap in employment surely wasn't helping either! I had some success with an application/interview spanning 2 months for a large company with a web department. I made it down to the final 5 candidates. My technical test didn't pan out the best; I completely understood the answers they ended up wanting but had difficulty arriving there based on what they were saying and expecting me to say. This was for a position that would take 3 hours to get to via transport, with a mandatory 8am start. I figured if I could land the role I'd be able to move nearby in a month or so. I was declined but offered a 3-month internship to develop a dashboard to track metrics and produce a report. I asked some questions, such as: will I be responsible for the whole development? Can I make my own choices on tech, or can we leverage existing solutions instead of building from scratch? Would I have a mentor/supervisor I can get guidance from if needed? I was told that if I have to ask questions like that, I'm not right for the job, and I lost my chance at the internship.
Taking a break from web studies/job-searching to do Lua game mods and some Python
A little disheartened after the long job hunt and study getting nowhere, I didn't seem able to compete. I took a break from it all and started creating mods for a popular FPS co-op game. I had also created a basic level editor earlier that year that imported the game data (JSON) into Maya (a 3D content program) with Python; you could then adjust it or import your own additions and export via Python to Lua, which my Lua mod would import to allow custom levels or other fun spawning of prefabs. I decided to take on a challenge and do what no one else had had luck with: increasing the max number of players permitted in a networked level. I probably spent far too much time on that. I got something tangible by December, and later in 2016 spent all 4 days of my Easter weekend finishing it up for release; it was very popular within the community. Guides, discussions and YouTube videos appeared for a while.... but this work didn't seem like it'd help with securing a job where I had put my efforts.
Transitioning from Windows to Linux full time
After New Year's, my laptop running Windows 8.1 wouldn't boot; the bootloader had corrupted. It turns out this was due to the fastboot feature having a random chance of causing that. I spent about 2 days trying to troubleshoot it with my phone browser; the solutions I came across were of no help, so I booted into a Linux live CD and backed up what files I could to an external drive. Might as well get back into Linux, I thought, so time to install Ubuntu and see what had changed since 2008. Lots of fun problems to solve, from installers with UEFI compatibility issues (had to learn to change this in the BIOS) to installers giving me a black screen because my GTX 960M didn't yet have proper support in Nouveau. Not long in, I learnt about QEMU/KVM for virtualizing Windows on top of Linux with near-native performance (93-98%) and full access to the GPU via passthrough. Sounds fun, so I learnt a bunch and wrote notes, but there was quite a bit of difficulty following the various sources, where some information was useful and some not, so I decided I might as well convert my notes into a decent markdown blog post to help others out. By the end of it I had switched to Arch Linux and learnt that, despite all the other hardware meeting the requirements, the mobile GPU writes to the Intel iGPU framebuffer; accomplishing this at the time was not going to happen :( Continued use of Linux, especially Arch, involved quite a bit of maintenance and learning.
4 years after graduation, my first official dev job, at a startup
The guy I developed the TEDx project for reached out to me, offering work at his startup. Hell yeah, a job; my bank account will finally see something positive :) Unfortunately it wasn't doing web development, but there were two devs working there already and I'd get paid to code fun IoT/home automation project stuff! A computer was purchased at a local store for me to put together and use that very day. I set it up to run Arch like I have at home; I was comfortable in it by now and it seemed pretty good for doing dev work on. Those two other devs didn't stick around for long. One was still doing CompSci at uni; they had been working on a Xamarin app to control the various devices in the office but kept running into bizarre issues. I didn't know Xamarin or C# well but pitched in where I could; some problems that troubled him were a breeze for me to solve. The other was a recent graduate doing web development on contract. "Cool!" I thought. Unfortunately neither knew how to use git, nor were they familiar with Agile practices, nor cared much for documentation. I was confused why the web developer used plain CSS and jQuery with all their code in a single JS file. Their HTML was based off some Bootstrap template with heavy copy/paste instead of a templating language that would have avoided the lack of DRYness.... Little did I know I'd later have to do some maintenance on this without being authorized to fix it, since it was functionally working as far as management was concerned and not worth the time to refactor for future maintenance work.
Saved the company a bunch of money
While working there, I learnt a proprietary solution we were using and wrote documentation on it for future developers. Multiple times I tried to communicate problems with that vendor's software and the better alternatives available, while being told "deal with it, it's the best in the industry" (without any backing statistics/tests). It wasn't until the issues became glaringly obvious, and the high costs of going forward with it (we were on an evaluation license), that management listened to the voice of reason. I did some research so I could back up my claims and presented a very popular, actively developed and sponsored open source project. I spent time getting familiar with it and how to set it up, what drawbacks/limitations might exist (some features needed work, but I believed I could contribute what was needed to get it on par), documenting and making sure it was a solid replacement, among the many other benefits it had going for it over the proprietary vendor solution.
Making tech choices
On top of that, I picked up React Native and Redux, which were great to work with using ES6. As a solo developer with what I assume was not an ordinary workload, being able to share the same codebase for both iOS and Android (plus any other supported platforms) was a great boon, and the performance and dev features/speed were great compared to my other options. The choice also mixed well with the Node.js background I had been building up prior to the job. I felt I made the right choice. I set up some backend services to communicate with the larger open source project, with our own additions, over WebSockets to the mobile app. I designed the MVP app similarly to my e-learning app from previous work, but using JSON instead of YAML, with the JSON generated/cached based on DB queries. The design gave a modular/flexible UI that adapted between phone and tablet.
Getting familiar with embedded IoT dev with C
After that we had a business opportunity to pursue. An electronics engineer reverse engineered some product's communication protocol, providing serial connection details and hex codes. It was my job to put together some hardware (Arduino) that would eavesdrop on the communication to the device's touchpad controller and allow us to control the device via the Arduino. I had enjoyed learning about protocols like infrared in an earlier project, but this was a step up for me: I had never worked with C and struggled with the lack of features I take for granted in scripting/dynamic languages. Parsing the binary/hex output into packets, verifying/identifying them, and responding with the correct timing was the biggest hurdle for me. I only had one UART serial connection to work with, having to manually switch between listen/send with a limited buffer for the bytes, while not blocking the device from updating its controller, keeping that controller responsive, and still being able to inject our own instructions as if we were the controller or the device providing updates. Debugging I had no idea how to go about: this was hardware, not what I'm used to, where I have breakpoints in code and can view the current state. I did naive debugging with text logging via serial, but this was a bad idea, since that processing affected timing, causing more bugs! :D It was semi-viable in some situations as long as the string was minimal: error codes instead of descriptions or long values.
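The real code was Arduino C and the protocol isn't public, but the framing logic described above can be illustrated in JavaScript. The start byte, length field and additive checksum here are invented for the example, not the actual device's layout:

```javascript
// Parse a raw byte stream into frames of the invented form:
// [0x7E, length, ...payload bytes, checksum], checksum = sum of payload & 0xFF.
function parseFrames(bytes) {
  const frames = [];
  let i = 0;
  while (i < bytes.length) {
    if (bytes[i] !== 0x7e) { i++; continue; }          // hunt for a start byte
    const len = bytes[i + 1];
    if (len === undefined || i + 2 + len >= bytes.length) break; // frame incomplete
    const payload = bytes.slice(i + 2, i + 2 + len);
    const checksum = bytes[i + 2 + len];
    if (checksum === (payload.reduce((a, b) => a + b, 0) & 0xff)) {
      frames.push(payload);                            // valid frame
      i += 2 + len + 1;
    } else {
      i++;                                             // bad checksum: resync
    }
  }
  return frames;
}
```

On the Arduino side the same idea runs incrementally, one byte per loop iteration, which is what makes it compatible with the tight timing constraints described above.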
Feelings of success
I got that embedded IoT project working as we wanted in the end, controlled by the Node.js server or the mobile phone app. We demo'd the product to the company that owns/sells the device and they loved it, amazed at what we had achieved. This was a client with big money and international business. I know my code wasn't great, but I learnt all this and pulled it off in a reasonably short time; I felt proud of this milestone, with a company like that being so impressed and seeing value in what I had done almost entirely by myself (I didn't have the reverse engineering chops yet). So this was going to go ahead. I enjoyed the project and wanted to learn more, so I read up a bunch on MCUs and SBCs, sensors and the like to get a good feel for what is out there, what we could do with them, and flexible designs for a product so we could provide a similar service for other companies. Management wasn't too happy with their sole developer being distracted by such education efforts and wanted me to focus on other tasks (I did a bunch of this after work in my spare time as it really interested me; side note, I have a problem where I get rather consumed with what I work on/learn, I'll chalk it up to my ADD). I had done my job creating the MVP and negotiations were to go ahead, so I was moved back to the core product.
Craving to learn more Management wants to begin beta testing on some hardware they're ordering (small headless servers); anyone beta testing the product (company members only at present) will have to fork out a few hundred for this themselves. I pointed out we could test on considerably cheaper hardware with SBCs like Pis, the CHIP ($9) or the Pine64 ($29), for example... and got the usual "no, stay out of it". A week or so later the hardware arrived.
Linux, filesystems, automation and network installs My next task was to install an OS onto these machines, which have no monitor output or keyboard/mouse. An automated PXE install (I'd never actually done one before) sounded good; the problem was that network availability on these machines was unreliable at boot. I ran into a few issues, but after learning about PXE (which turned out not to be viable), I came across iPXE. We got a new batch of machines, and the newer models' BIOS didn't support iPXE like the older ones did... so I burned an iPXE image to USB, got a serial console set up, and chainloaded the kernel and initial ramdisk over an HTTP server via an iPXE script. I had issues with Debian/preseed/drivers, but openSUSE went pretty well (I planned to later use AutoYaST with Ansible to automate the whole thing and get it all in git for traceability). BTRFS on a small SSD though (the openSUSE default partitioning) wasn't a good idea, as I soon found myself running out of space. Thankfully I had been reading up on the various filesystems and their pros/cons beforehand, especially BTRFS, knowing it was the default and quite different from the usual ones (I'd read about it often on news blogs I've followed for years). I dealt with the issue, but had some other problems that seemed BTRFS-specific with Docker (we deploy projects with Docker for the benefits it brings), so I decided to stick with what I'm most familiar with, EXT4, and repartitioned. I documented, investigated, and filed issues along the way.
Burnout, "developers are a dime a dozen", am I cut out for this professionally During that job I stressed myself into burnout. I've left out many other things, especially ones I looked into heavily but didn't quite get time to implement, such as CI/CD systems for mobile apps (all planned out and decided on), dev machines (zsh with dotfiles and a package list to install, Arch and OSX, mostly planned out in anticipation of new devs we were going to bring on a while back), additional projects and protocols to get clued up on, and project management processes/workflow (again for new devs that didn't end up happening). I felt this was stretching myself very thin, and that I wasn't getting the opportunity to grow in any particular area for my career. I was okay in various areas and understood things well, but my coding was not at the quality/speed I'd like, and I was forgetting things that I'd have to relearn. I complained to management at one point that I felt it was unrealistic to expect a developer to cover so many areas (web dev, mobile dev, embedded, sysadmin/devops, design/tech decisions (architecture?), etc.) as commonplace, and to task them with frequently switching between these areas/contexts. I was told "Developers are a dime a dozen". I disagreed that anyone sane would be doing all these things for minimum wage (I did like the freedom of development choices and growing my skills, and no one else wanted to hire me to code, so what's a bit of sanity?). It was becoming a problem; I didn't sign up for all of this, and I had thought I'd finally have other developers to work with, maybe even learn from.
Resignation. Where to now I resigned from that job after a meeting revealed how little I was valued (among other things), despite my honest belief that without my efforts they would not have been able to afford the talent needed to get where they are today. But now I'm back on the job hunt with an obvious lack of a good reference from the company I spent half a year at. I feel I have a broad range of skills, but not many employers will be interested, as they advertise for more honed skillsets... I might be perfect for a startup, but given my previous experiences I'm not fond of the chances. Do I try to promote my range of dev/IT skills, or do I spend time unemployed until I'm good enough for a junior role in one skillset such as web dev? I'm in my thirties now and would like to earn more than minimum wage doing what I love.
submitted by kwhali to cscareerquestions

Architectural Writeup: I built an open-source hosting platform for node.js microservices. Ask Me Anything!

Hello. I built a 100% open-source hosting platform for JavaScript microservices, in JavaScript. Ask me anything!
The project: http://hook.io
The source code: http://github.com/bigcompany/hook.io
Built with: Node.js, CouchDB, and GitHub Gists. Node Package Manager (npm) modules are fully supported.
Architectural details can be found a bit further down.
Interested, but too busy to read this now?
If you'd like, you can run the following Curl command to opt-in to our mailing list. We'll periodically send you updates about the project.
curl [email protected]
Replace [email protected] with your email address.
What is the purpose of hook.io?
hook.io is an open-source hosting platform for webhooks and microservices. The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms. hook.io provides an easy way to create, host, and share microservices. Through developing many small re-usable microservices, you can reduce the complexity of your applications while improving stability.
As a simple comparison, a single microservice could be considered the equivalent to what would be a single route in an Express.js application.
Why or how would I want to use hook.io?
You should want to use hook.io if it can make your life as a developer easier.
The most basic use-case for hook.io is quick and free webhook hosting. You can instantly create a simple hook which parses the incoming parameters of an HTTP request and performs arbitrary actions on it. For instance: send an SMS message every time the Hook is requested as a webpage. Since npm is supported, you can re-use any existing library from the extensive npm module repository. You can also configure Hooks to be executed on a schedule using a Cron pattern.
It's worth noting at this point that Hooks are fully streaming. Inside your Hook source code you have direct access to Node's http.IncomingMessage and http.ServerResponse request and response streams. This means you can treat the inside of a Hook exactly as if it were a streaming middleware in a regular Node HTTP server. Having direct access to these streams is extremely useful, and I am unsure whether any other microservice hosting provider currently offers this feature.
More advanced use-cases for hook.io involve replacing individual parts of your application with microservices. Instead of adding a new route or module to your application, you could create a Hook responsible for only one unit of functionality and call it with a regular HTTP request from inside your existing application. One specific example could be building a Hook with a custom theme which acts as a stand-alone sign-up form. This sign-up form can then be loaded server-side in your application using one HTTP GET request. It might sound complicated at first, but integrating microservices with your existing application is actually very easy. In the upcoming weeks we'll work on releasing specific guides for separating application functionality into microservices.
An even more advanced usage would be building a suite of Hooks and composing them to create new and unique applications! Since every Hook understands standard in and standard out, and Hooks can easily call other Hooks from inside each other, there is an endless number of combinations to be made. This composability provides the foundation for flow-based programming without imposing any specific rules for composition. A specific example could be a Hook (called "tar") responsible for taking in STDIN and streaming out a compressed tar file. Once this Hook is created, you could easily pipe the results of another Hook (such as an image downloader) into the "tar" Hook. These Hooks don't exist yet, but I am certain someone will build them in the near future.
Unix Pipes!
hook.io is very friendly with Unix Pipes. Using STDOUT and STDIN you can connect hook.io to your existing Unix Tool chain. The best way to explain this concept is to review the Curl examples.
Here is one specific example of using hook.io to flip a cat upside-down with cat and curl. You will need to provide your own cat.png
cat cat.png | curl -F 'degrees=180' -F '[email protected];type=image/png' http://hook.io/Marak/image/rotate > upsidedown-cat.png
The Data!
If you noticed in the last example, hook.io is fully capable of streaming binary data. It also supports streaming file uploads, multipart form uploads, and will assist in parsing all incoming form fields, JSON, and query string data.
Software Architecture
The core software architecture of hook.io is Resource-View-Presenter ( RVP ).
Resources are created using the npm resource module.
View-Presenters are created using the npm view module with regular HTML, CSS, and JavaScript. The same View-Presenter pattern is also used to implement custom theming for Hooks; see hook.io/themes
Important dependencies
mschema - Provides validation throughout the entire stack.
big - Small application framework. Provides website app which hook.io extends.
resource-http - Provides core HTTP server API. Helps in configuring Express with middlewares like Passport
resource-mesh - Provides a distributed event emitter mesh using a star network topology. hook.io primarily uses this module as a monitoring agent to report status back to our monitoring sink.
resource-user - Provides basic user API ( signups / logins / encrypted passwords / password resets / etc )
Server Architecture
There is one front-facing HTTP server and any number of Hook Workers.
The front-facing server is responsible for serving static content, maintaining user session data, and piping requests between the client and Worker.
Workers are responsible for executing user-submitted source code and piping their responses through the front-facing server to the client.
Note that communication between the Hook and the client remains streaming throughout the entire architecture. This gives hook.io the ability to perform complex tasks, like transcoding large video streams, without clogging up any part of the system with large memory buffers.
Hook Servers and Hook Workers are immutable and stateless to ensure stability of the platform. They are designed to fail fast and restart fast. mon is used as a process supervisor.
This architecture can theoretically scale to upwards of 10,000 concurrent connections. Realistically, it will probably be closer to 4,000. When the site needs to scale past this, we will create several front-facing servers and load balance incoming HTTP requests to them using DNS.
Hook and User configuration data are stored in a CouchDB database. If the database grows too large, we will split it into several smaller database servers, sharded by the first letter of each document's primary key.
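The sharding rule described could be as simple as this sketch (the shard names are made up; with two shards, keys starting a–m go to the first and n–z to the second):

```javascript
// Pick a shard for a document by the first letter of its primary key.
function shardFor(key, shards) {
  const c = key.toLowerCase().charCodeAt(0);
  if (c < 97 || c > 122) return shards[0]; // non-alphabetic keys → first shard
  const lettersPerShard = Math.ceil(26 / shards.length);
  return shards[Math.floor((c - 97) / lettersPerShard)];
}

// With two hypothetical CouchDB servers:
const shards = ['couch-a-m', 'couch-n-z'];
// shardFor('alice', shards) === 'couch-a-m'
// shardFor('nancy', shards) === 'couch-n-z'
```

The appeal of first-letter sharding is that any node can compute a document's home from the key alone, with no lookup table; the trade-off is that real-world keys are rarely uniform across the alphabet.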
Source code for Hooks is currently stored on Github as Github Gists. I'd imagine sometime in the future we will add the option to store and edit source code directly on hook.io itself. The project is open-source, so you could be the first to open up the issue!
Questions? Comments? Feedback?
Let me know! Open-source projects get better with collaboration. Every comment and piece of feedback counts.
Maybe take five minutes to try the platform out? You might like it!
The dependency tree for hook.io is re-used in many applications. Several of these dependencies I maintain myself. If you have feedback or comments about any specific dependency let me know!
submitted by _Marak_ to node

hexadecimal to bytes in javascript

The protocol buffer compiler for JavaScript has many options to customize its output in addition to the library and import_style options mentioned above. For example, the binary option generates code that lets you serialize and deserialize your proto from the protocol buffers binary wire format.

When data is read from a file or network, it is read byte by byte into a data buffer. Data buffers are temporary storage used for transferring data, and to work with binary data we need access to them. In Node and Node-RED that access comes through the Buffer object. A buffer is a space in memory (typically RAM) that stores binary data; in Node.js we can access these spaces of memory with the built-in Buffer class. Buffers store a sequence of integers, similar to an array in JavaScript, but unlike arrays you cannot change the size of a buffer once it is created.

When receiving binary data over HTTP, be careful with res.setEncoding(): calling it creates a StringDecoder and sets it as the default decoder, and passing null or undefined still creates a StringDecoder with its default of UTF-8. Even .setEncoding('binary') decodes chunks into strings, so for binary payloads it is safer to leave the stream un-decoded and collect the raw Buffer chunks. A common symptom of decoding by mistake is a corrupted file when writing a downloaded binary back to disk.

The js-bson deserializer has related options: promoteBuffers returns a deserialized BSON Binary as a Node.js Buffer instance, promoteValues promotes BSON values to their closest Node.js equivalent types, and fieldsAsRaw lets you specify fields to return as unserialized raw buffers.

Binary-to-text encodings supported by Buffer include base64 (which, when creating a Buffer from a string, also correctly accepts the "URL and Filename Safe Alphabet" specified in RFC 4648, Section 5) and hex (each byte encoded as two hexadecimal characters). Legacy character encodings include ascii, for 7-bit ASCII data only.

Finally, when messaging a child process over the JSON channel, you can only send values with a valid JSON representation, which Buffer and Date objects don't really have; to send binary data efficiently, open an additional pipe when spawning the child process.
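For the hex-to-bytes question in the title, Node's Buffer handles both directions directly:

```javascript
// Parse a hex string into bytes, and re-encode the bytes as hex or base64.
const buf = Buffer.from('deadbeef', 'hex');

console.log(buf.length);             // 4 bytes: 0xde 0xad 0xbe 0xef
console.log(buf.toString('hex'));    // 'deadbeef'
console.log(buf.toString('base64')); // '3q2+7w=='
console.log(buf[0]);                 // 222 (0xde) — buffers index like arrays
```

The same round trip works for any of the encodings listed above; only the size is fixed once the buffer is created.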



