Useful code snippets
These are useful code snippets that I have learned and put together for future reference.
I hope that anyone reading this will find something useful here.
Remember (!)
You can't go fast in the long term by rushing. If you want to go fast in the long term, take your time and do a good job!
# PHP API GET call
# PHP
$events = collect(
json_decode(
file_get_contents('https://jsonplaceholder.typicode.com/todos/1'),
true
)
);
/*
Illuminate\Support\Collection Object
(
[items:protected] => Array
(
[userId] => 1
[id] => 1
[title] => delectus aut autem
[completed] =>
)
)
*/
# PHP list()
Assign variables as if they were an array.
# PHP
$my_array = array( "Dog", "Cat", "Horse" );
list( $pet, $bad_pet, $animal ) = $my_array;
echo "I have several animals, a $pet, a $bad_pet and an $animal.";
# Laravel Collection - filter many levels deep
# PHP
$collection = collect($array);
$filtered = $collection->filter(function ($item) {
return $item->employee->id == 7;
});
# Laravel Collection - sort many levels deep
# PHP
$collection = collect($array);
$sorted = $collection->sortBy(function ($item) {
return $item->article->tag->id;
});
# JSONP - Cross-domain Ajax call with credentials & session (GET only)
# PHP
session_start();
echo $_GET['callback'] . '(' . json_encode($_SESSION) . ')';
# Javascript
$.getJSON("http://login.graftik.lv/libm/getloggedinuser.php?callback=?", function(json) {
console.log(json);
});
# Collection - add property from a different database table
# PHP
$users = Users::all();
$users->each(function($user, $key) {
$user->posts_count = Posts::where('user_id', $user->id)->count();
});
// Here's a more performance efficient solution.
$users = Users::all();
$postsCounts = DB::table('posts')
->select(DB::raw('count(*) as posts_count, user_id'))
->whereIn('user_id', $users->pluck('id')->all())
->groupBy('user_id')
->get();
$users->each(function($user, $key) use ($postsCounts) {
// Guard against users that have no posts at all
$user->posts_count = optional($postsCounts->firstWhere('user_id', $user->id))->posts_count ?? 0;
});
# Fetch API with async/await
# Javascript
async function fetchUsers(url)
{
const res = await fetch(url)
const data = await res.json()
return data
}
fetchUsers('https://jsonplaceholder.typicode.com/users')
.then(data => {
console.log( data )
})
# Elvis operator
# PHP
$user = $user ?: create('App\User');
// Is the same as: $user = ($user == true) ? $user : create('App\User');
# Model scope
# PHP
scopeSomething($query) // defined on the model, prefixed with "scope"
$model->something()    // called without the prefix
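A minimal sketch of what that looks like in practice (the Post model and the published column are just assumptions for illustration):
# PHP
class Post extends Model
{
// The "scope" prefix marks this as a local query scope
public function scopeSomething($query)
{
return $query->where('published', true);
}
}
// Eloquent strips the prefix when the scope is called
$posts = Post::something()->get();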
# Vue.js - Watch route and force the component to update
# Javascript
watch: {
'$route' (to, from) {
this.$forceUpdate()
}
},
# Change only one param in Vue router
# Javascript
<router-link
:class="{ active: user.id == selected }"
:to="{ name: 'MainRoute', params: {
selectedUser: user.id,
selectedView: $route.params.selectedView,
selectedMonth: $route.params.selectedMonth,
selectedYear: $route.params.selectedYear}}">
{{ user.username }}
</router-link>
# Javascript - find object by value
# Javascript
some()
// will return true or false, depending on the condition.
// It tests whether at least one element fits the condition
find()
// will return an item itself (the first matched item),
// if the condition evaluates to true, and undefined if it evaluates to false.
findIndex()
// will return an index of the item (the first matched index),
// if the condition evaluates to true, and -1 if it evaluates to false
filter()
// will create a new array with all items,
// which fit the condition (otherwise it returns an empty array)
const users = [
{
"type": "User",
"userId": "5b774905c2b2ac0f33ac4cc7",
"name": "Mike"
},
{
"type": "User",
"userId": "5b77490f3084460f2986bd25",
"name": "Pater"
}
];
const someObject = users.some(item => item.name === 'Mike');
const targetObject = users.find(item => item.name === 'Mike');
const targetIndex = users.findIndex(item => item.name === 'Mike');
const filteredObjects = users.filter(item => item.name === 'Mike');
# Javascript - filter search results (case insensitive) + order
# Javascript
new Vue({
el: '#app',
data: {
stories: [
{ title: 'Once upon a time in Berlin', author: 'Alex', upvoted: 10 },
{ title: 'The sky is the limit', author: 'Tobi', upvoted: 100 },
{ title: 'Once upon a time in Berlin', author: 'Patrick', upvoted: 10 },
{ title: 'Once upon a time in Milan', author: 'Patrick', upvoted: 11 },
{ title: 'Until the end', author: 'Tobi', upvoted: 9 },
],
searchTerm: ''
},
computed:
{
filteredStories()
{
let filteredStories = this.stories.filter((story) => {
return story.author.toLowerCase().includes(this.searchTerm.toLowerCase());
})
let orderedStories = filteredStories.sort((a, b) => {
return b.upvoted - a.upvoted;
})
return orderedStories;
}
}
});
# Javascript - dynamic imports
# Javascript
{
path: '/',
component: () => import('./components/myComponent.js')
}
# PHP - Null coalescing operator
# PHP
$foo = $bar ?? 'something';
$foo = isset($bar) ? $bar : 'something';
# SQL - Search/Filter by entries in another table
# SQL
SELECT inkasso.ink_number, inkasso.full_name
FROM inkasso
WHERE
EXISTS (SELECT 1
FROM
inkasso_phones
WHERE
inkasso_phones.ink_id = inkasso.id AND
inkasso_phones.phone LIKE '%2%'
)
LIMIT 3
# PHP
$inkasso = Inkasso::whereExists(function ($query) {
$query->select(DB::raw(1))
->from('inkasso_phones')
->whereRaw('inkasso_phones.ink_id = inkasso.id')
->whereRaw("inkasso_phones.phone LIKE '%26542973%'");
})->take(100)->get();
$inkasso_phones = InkassoPhones::whereIn('ink_id', $inkasso->pluck('id'))
->get();
$inkasso->each(function($item, $key) use ($inkasso_phones)
{
$item->phones = $inkasso_phones->where('ink_id', $item->id);
});
whereHas()
// Retrieve posts with at least one comment containing words like foo%...
$posts = App\Post::whereHas('comments', function ($query) {
$query->where('content', 'like', 'foo%');
})->get();
$posts = Post::with('comments')->get();
# Format database table for forums / StackOverflow
"Create Table."1.phpMyAdmin -> Print 2.Copy to excel 3.From excel to text-table tool 4.From text-table tool to VS Code (for nice colors - Java highlighting)
# SQL - sort by related table with multiple entries
# SQL
-- MySQL version (GROUP_CONCAT)
SELECT
parents.fullname,
children.parent_id,
GROUP_CONCAT(children.fullname ORDER BY children.fullname ASC) AS children_names
FROM
parents
JOIN
children ON parents.id = children.parent_id
GROUP BY
children.parent_id
ORDER BY
children_names
-- PostgreSQL version (array_agg)
SELECT
parents.fullname,
children.parent_id,
array_agg(children.fullname ORDER BY children.fullname ASC) AS children_names
FROM
parents
JOIN
children ON parents.id = children.parent_id
GROUP BY
children.parent_id,
parents.fullname
ORDER BY
children_names
# Laravel - order by column in related table
The most common example is a forum that shows topics ordered by the latest post in each topic. How do we do that?
First, this is our relationship in app/Topic.php:
# PHP
public function posts()
{
return $this->hasMany(\App\Post::class);
}
Now, you need to realize that this wouldn’t work:
$topics = Topic::with('posts')->orderBy('posts.created_at')->get();
What we actually need to do – two things, actually:
1. Describe a separate relationship for the latest post in the topic:
public function latestPost()
{
return $this->hasOne(\App\Post::class)->latest();
}
2. And then, in our controller, we can do this “magic”:
$topics = Topic::with('latestPost')->get()->sortByDesc('latestPost.created_at');
Let's test it out - this time showing all users ordered by their latest post. If you look at the data, user ID 2 should come first with the latest post on the 27th, then user ID 3 with a post on the 26th, and then user ID 1 with a post on the 25th.
$users = User::with('latestPost')->get()->sortByDesc('latestPost.created_at');
foreach ($users as $user)
{
echo $user->id .
' - ' .
$user->latestPost->title .
' (' . $user->latestPost->created_at . ')';
}
So, isn’t that sweet?
# Laravel - limit with()
# PHP
$posts = Posts::where('id', $id)
->with(['comments' => function($query) {
return $query->take(10);
}])
->first();
# SQL - sort by sum of another table
# SQL
SELECT
users.username, sum(donations.donation) as donation_sum
FROM users
INNER JOIN donations
on users.id = donations.user_id
WHERE
donations.date between '2019-01-01' and '2019-12-31'
GROUP BY
users.id
HAVING
sum(donations.donation) = 400
ORDER BY
donation_sum
# SQL - filter and sort by anything
To filter and sort by anything, first create indexes (e.g. using Adminer); the sorting then becomes fast. Make sure there is no GROUP BY, otherwise it won't use the index for the lookup. Using EXPLAIN, we can see that sorting was the bottleneck.
CREATE INDEX idx_inkasso_end_date_desc ON inkasso (end_date DESC NULLS LAST);
# SQL
SELECT
inkasso.id,
inkasso.i_date,
inkasso.end_date,
inkasso.full_name,
inkasso.debt_sum,
inkasso.pay_sum,
inkasso.rest_sum,
inkasso.pers_code,
inkasso.ink_number,
inkasso.ink_sum,
inkasso.employer,
inkasso.credit_level,
inkasso_items.total_paid_ink,
inkasso_items.payment_count,
end_statuses.description AS end_status,
inkasso.note,
(SELECT description FROM inkasso_statuss where inkasso_statuss.id = inkasso.status) AS status_desc,
(SELECT description FROM inkasso_statuss where inkasso_statuss.id = inkasso.parent_status) AS parent_status_desc,
(SELECT description FROM companies where companies.id = inkasso.parent_id) AS parent_company,
(SELECT description FROM companies where companies.id = inkasso.grand_parent_id) AS grand_parent_company
FROM inkasso
LEFT JOIN (
SELECT
ink_id,
sum(ink_sum) AS total_paid_ink,
count(*) AS payment_count
FROM
inkasso_items
GROUP BY ink_id
) inkasso_items ON inkasso.id = inkasso_items.ink_id
LEFT JOIN end_statuses ON inkasso.end_status = end_statuses.id
WHERE
inkasso_items.payment_count > 1
ORDER BY
inkasso.end_date desc NULLS LAST
LIMIT 10
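To double-check that the new index is actually being used, you can prefix a query with EXPLAIN (a quick sanity check; this example is my addition):
# SQL
EXPLAIN SELECT id, end_date FROM inkasso ORDER BY end_date DESC NULLS LAST LIMIT 10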
# Vue.js - two way binding input for components
First declare v-model on your component
# Javascript
<ml-sidebar-text v-model="fullname"></ml-sidebar-text>
Next declare a value prop on the component, bind it to the input's value, and emit the input value on @input:
export default {
template: `
<input
:value="value"
@input="$emit('input', $event.target.value)"
/>
`,
props: ['value'],
};
# Slow order by - create index
Ordering can take a lot of time. But if an index is created (ascending or descending, depending on the need), query execution time goes down dramatically.
CREATE INDEX idx_comments_created_desc ON inkasso_comments USING btree (created DESC)
# Login using id
login-using-id/48
# Adminer inline editing
Use the select buttons to enable checkboxes and inline editing instead of writing an SQL query.
Ctrl + click
# Upgrade Laravel
1. Update all composer.json dependency versions to the latest versions (check the latest version on the packagist website).
Sometimes it will say v.* - added support for the latest Laravel version. (Make sure they are compatible.)
2. Run composer update
3. Go through the upgrade guide
4. Permissions - sudo chmod -R 777 /var/www/php7/melnalapa5.8.2/
5. Replace all absolute links to the new address
6. Re-create symlinks in your new project folder
(ln -s /var/www/php7/melnalapa5.8.2/storage /var/www/php7/melnalapa5.8.2/public/storage)
7. Update .bashrc paths
8. Update Auto Hotkey scripts
9. Update /etc/apache2/sites-available/ links
10. Update browser bookmark links
11. Update TV bookmark url
12. Update cron path
* * * * * php /var/www/php7/melnalapa5.8.2/artisan schedule:run >> /dev/null 2>&1
13. Update /home/blacklapa/EmailSenderSend.sh path to script
14. Update /etc/supervisor/conf.d/laravel-worker.conf
# Run a command in background
[note: even better - use tmux]
sudo nohup php /var/www/php7/melnalapa5.8.2/artisan emailsender:send &
# Dump (toArray) to console without interruption
This requires a dump server to be running (like Laravel Telescope). It won't show the output in the browser - only in the console. And it won't stop the execution of the script.
# PHP
dump($outbox->toArray());
# Update Laravel Telescope
1. Re-publish resources
C:\Users\Martins\Desktop\base.melnalapa.lv\trunk\php7\melnalapa5.8.2 (trunk/php7/melnalapa5.8.2) λ php artisan telescope:publish
2. Commit & update the repository (svn up)
3. blacklapa@melnalapa:~$ sudo chmod -R 777 /var/www/php7/melnalapa5.8.2/storage/
# Linux empty file content
user@host:$ > /path/to/file
# Linux search in files
user@host:$ grep -Ri checkEmployeeCalls /var/www/*
# Add VPN account in Mikrotik
Winbox > PPP > Secrets > Select > Copy
Name: username
Password: password
Service: pptp
Local Address: 192.168.1.1
Remote Address: 192.168.1.X
# Linux send e-mail from terminal
$ sendmail [email protected] << EOF
subject:Subject here
from:[email protected]
Body message here...
EOF
or..
$ mail -s 'Subject here..' [email protected] <<< 'Body here..'
# Linux enable reboot
On some systems reboot is disabled by default, to enable it:
sudo chmod a+s /sbin/reboot
# Linux enable/disable ssh root login
To disable root SSH login, edit
/etc/ssh/sshd_config with your favorite text editor.
[root@root ~]# nano /etc/ssh/sshd_config
Change this line:
PermitRootLogin yes
to this:
PermitRootLogin no
# Allow external access by IP (MikroTik)
Winbox > login > IP > Firewall > NAT
+ or... you can also copy an existing entry and change only the Src. Address
Then you can use Add Comment to note what it is for
# Edit cron jobs
First change default editor to nano
There are 2 ways to change your default editor:
sudo update-alternatives --config editor
or
Edit your ~/.bashrc or ~/.bash_profile
export EDITOR=/usr/bin/nano
And run
source ~/.bashrc
Then to edit cron jobs:
$ sudo crontab -e
# PHP convert assoc array to object (nested)
Notice also how the data is passed in Javascript.
# Javascript
Export.download({
selectedOptions: this.selectedOptions
}).then(response => {
this.isDownloading = false
})
class Export {
static download(data) {
return axios.post(`${store.rootPath}api/download-stats`, data)
}
}
# PHP
public function downloadStats(Request $request) {
$selectedOptions = convertToObject($request->selectedOptions);
$statistika_filename =
KompanijuPortfolio::generateStatistika( $selectedOptions );
return $statistika_filename;
}
The array before conversion, and the resulting object after:
{
"0": array:3 [
"selected_portfelis" => "2019-04-09"
"selected_level" => 2
]
}
{
"0": {
"selected_portfelis": "2019-04-09"
"selected_level": 2
}
}
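The convertToObject() helper itself isn't shown here; a minimal sketch of one way to implement it (an assumption, not necessarily the original implementation) is to round-trip the array through JSON:
# PHP
function convertToObject($array)
{
// JSON_FORCE_OBJECT also turns numeric-keyed arrays into objects,
// matching the "0" key in the dump above
return json_decode(json_encode($array, JSON_FORCE_OBJECT));
}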
# Enable HTTPS/SSL
First you need to get an SSL certificate (.crt and .key files) and upload it to your server.
Then, you already have a Virtual Host file in /etc/apache2/sites-available where it is listening on port 80 for HTTP. You need to copy it, rename it to something like ssl..., and change the listening port to 443 for HTTPS.
For example -
/etc/apache2/sites-available/my-site-ssl.conf
<VirtualHost *:443>
ServerName php7.melnalapa.lv
DocumentRoot /var/www/php7
Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains"
SSLEngine on
SSLCertificateFile /etc/ssl/base.melnalapa.lv.crt
SSLCertificateKeyFile /etc/ssl/private/base.melnalapa.lv.key
</VirtualHost>
Or use Let's Encrypt Certbot to do it all automatically for you for all hosts, plus renewal.
Allow access from outside before renewing...
$ sudo certbot --apache
$ sudo certbot renew --dry-run
or
certbot renew && restartapache
Redirect all http traffic to https
In port 80 virtual host add this
<VirtualHost *:80>
ServerName webmail.samhobbs.co.uk
RewriteEngine on
RewriteCond %{HTTPS} !^on$ [NC]
RewriteRule . https://%{HTTP_HOST}%{REQUEST_URI} [L]
</VirtualHost>
# Enable HTTP/2.0 in Apache
1. Download Apache >= 2.4.24
2. Make sure you have HTTPS enabled first
3. Enable http2 module - sudo a2enmod http2
4. In your Apache config file
(/etc/apache2/apache2.conf), you can either add global support by adding this line: Protocols h2 h2c http/1.1
...Or you can activate it for individual, secure vhosts like this
<VirtualHost ...>
.....
Protocols h2 http/1.1
</VirtualHost>
5. Required: You must use PHP-FPM instead of mod_php for Apache
Install the PHP FastCGI module for PHP 7.0 (replace with "7.1" if desired):
sudo apt install php7.0-fpm
Enable the required modules, proxy_fcgi and setenvif:
sudo a2enmod proxy_fcgi setenvif
Enable php7.0-fpm:
sudo a2enconf php7.0-fpm
Disable the mod_php module:
sudo a2dismod php7.0
6. Required: Change Apache MPM from "prefork" to "event"
Since the default "prefork" MPM (Multi-Processing Module) is not fully compatible with HTTP/2, you'll need to change Apache's current MPM to "event" (or "worker").
First, disable the "prefork" MPM:
sudo a2dismod mpm_prefork
Enable the "event" MPM:
sudo a2enmod mpm_event
Restart Apache and PHP 7.0:
sudo service apache2 restart
sudo service php7.0-fpm restart
# Same website address to different servers (ports)
Imagine a situation where you need to upgrade your server. You install a new server, you copy over the
/var/www/ directory but how will you access it? You could change port forwarding to now point to the new server. But there is one problem - now your old server won’t be accessible anymore.
Different ports to the rescue! Instead what we can do is create a new port to forward (for example 8886 for http and 8887 for https, really anything above 1024 should be fine). And now you can access the older server website using port 80 and 443 and access the new website on the new server by website.com:8886 and website.com:8887. That way all sub domains will still work properly.
1. If you’re using MikroTik router go to IP -> Firewall and copy an existing entry.
2. Change the Dst. Port: to your new ports 8886 and 8887
3. Open the Action tab and change the To Address: to your local IP e.g. 192.168.1.113
4. On your new server you now need to listen on that port and for that server name. So create a new .conf file in /etc/apache2/sites-available/new-site.conf like this
Listen 8886
<VirtualHost *:8886>
DocumentRoot "c:/xampp/htdocs/tooltip"
ServerName base.melnalapa.lv
<Directory "c:/xampp/htdocs">
</Directory>
</VirtualHost>
So now when you go to base.melnalapa.lv:8886 it will be passed to the new server at 192.168.1.113, which will listen on port 8886 and serve its DocumentRoot :)
Bonus tip
Sometimes you may want to hide the port number in your URL address, because your application is written in a specific way. So you can force the router to forward all traffic coming from a specific external IP to an internal server on a specific port: go to IP > Firewall > NAT > Select entry > Action tab > To Ports: 8886
So now you can go to https://base.melnalapa.lv/ from that specific IP address and it will forward the request to 192.168.1.113 and port 8886
# Logger from anywhere
This logger allows me to dump to output and database from literally anywhere (frontend, backend, different site)
https://php7.melnalapa.lv/melnalapa5.8.2/public/log-viewer
# Javascript
// axios
axios.post('https://log.melnalapa.lv/', { log_message: 'test' })
// Or using a class
import Logger from '../../../Logger.js'
Logger.logMessage({
log_message: error_message
})
# PHP
// GET (URL limit 5000). Note: print_r first, then truncate:
file_get_contents("https://log.melnalapa.lv?log_message=" . urlencode(substr(print_r($logdata, true), 0, 5000)));
//POST (No limit, needs cURL extension):
$curl = curl_init('https://log.melnalapa.lv');
curl_setopt($curl, CURLOPT_POSTFIELDS, ['log_message' => 'whatever']);
curl_exec($curl);
curl_close($curl);
# Laravel - firstOrNew
Instead of checking if an entry exists in the database, we can create it if it doesn't, or update it if it does, like this.
# PHP
$settings = OperatoruStatistikaSettings::firstOrNew([
'employee_id' => $employee_id,
]);
$settings->$field = $value;
$settings->save();
# Vue.js - Two way data binding on components for multiple props
Usually you can only have two-way data binding using v-model, but you can use the .sync modifier to have as many two-way props as you want. And it doesn't throw the "Avoid mutating props directly..." warning.
# Javascript
// On component
<component :name.sync="user.firstname">
// To change the prop from inside the component
this.$emit('update:name', 'Johhny Ive')
* Note: .sync will not work with expressions - only variable names
<div v-for="(item, index) in myArray">
<my-component :text.sync='myArray[index]'>
</div>
# Javascript spread operator
Use all the values from an array or object.
# Javascript
data()
{
return {
filters: {
destinationNumber: 'destinationNumber',
textDecoded: 'textDecoded',
status: 'status',
},
}
}
SMS.get({
...this.filters,
datefrom: this.filters.datefrom,
dateto: this.filters.dateto,
})
# Vue.js - Fix root link always active & sub links
When you use router-link-active, the root link with / will always be active. To fix this, specify the exact attribute on the root link.
# Javascript
<router-link :to="{ name: 'index' }" exact>
Link to homepage
</router-link>
<router-link :to="{ name: 'reports' }" :class="{ 'router-link-active': isReportActive }">
Reports link
</router-link>
isReportActive()
{
return this.$route.name === 'report-detailed' || this.$route.name === 'report-by-days'
}
# Javascript - wrap function in a promise
Sometimes we need to fetch some data, then call another data-fetch function after it's done, and only after the second one has finished do we want to turn off loading and perform some other actions. This can be done using a simple promise wrap. Let's look at a simple sendSMS function.
P.S. There is also a more concise async/await way of doing this (see the sketch after this snippet).
# Javascript
sendSMS()
{
this.isSendingReply = true
SMS.send({
sms_text: this.smsReplyText,
phone: this.openedConversation.sender_number,
sms_type: 'inbox_reply',
}).then(response => {
this.fetchConversation().then(() => {
// Perform these actions only when both SMS.send and fetchConversation have finished
this.smsReplyText = ''
this.isSendingReply = false
})
})
},
fetchConversation()
{
/**
* A promise lets us know when an async operation has finished
* so that we can use .then()
*/
return new Promise((resolve, reject) => {
SMS.getInboxConversation({
sender_number: this.sender,
}).then(response => {
this.conversation = response.data
resolve()
})
})
},
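For reference, here is a sketch of the async/await version of the same flow (same assumptions as above - SMS.send, fetchConversation, and the component data are unchanged):
# Javascript
async sendSMS()
{
this.isSendingReply = true
await SMS.send({
sms_text: this.smsReplyText,
phone: this.openedConversation.sender_number,
sms_type: 'inbox_reply',
})
// fetchConversation() returns a promise, so it can be awaited too
await this.fetchConversation()
this.smsReplyText = ''
this.isSendingReply = false
},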
# Laravel - get raw SQL with bindings
Sometimes you want to look at the raw SQL query that will be executed by Eloquent or the Query Builder, so here is a useful snippet just for that.
# PHP
$ids = [100, 104, 12];
$query = \App\Models\Employes::where('id', '>', 100)->where('id', '<', 200)->whereIn('id', $ids);
echo vsprintf(str_replace('?', '%s', $query->toSql()), collect($query->getBindings())->map(function ($binding) {
return is_numeric($binding) ? $binding : "'{$binding}'";
})->toArray());
// Result: select * from "employes" where "id" > 100 and "id" < 200 and "id" in (100, 104, 12)
# Laravel - Raw queries with data binding on different connection
There are 3 things I want to show you in this snippet: 1) how to use raw queries with Laravel, 2) how to use data binding with raw queries, and 3) how to use a different connection for a raw query. So let's get started.
* Note: Data binding protects us from SQL injection.
# PHP
$replies = DB::connection('mysql_melnalapa')
->select("
SELECT id, reply_text, sending_time
FROM sms_sender_inbox_replies
WHERE phone_number = :phone_number
AND id IN (
SELECT MAX(id)
FROM sms_sender_inbox_replies
GROUP BY phone_number
)
", ['phone_number' => $phone_number]);
$replies = collect($replies);
# Laravel - Enable query log
# PHP
/**
* connection() is optional
*/
\DB::connection('mysql_melnalapa')->enableQueryLog();
// Your database queries...
$queries = \DB::connection('mysql_melnalapa')->getQueryLog();
log_msg($queries);
\DB::connection('mysql_melnalapa')->disableQueryLog();
# Upload file remotely using cURL
All you have to do on the server side is handle the file upload as usual (a sketch follows the command below). Make sure you allow CORS.
This can also be done using Postman for testing. Simply choose POST, Body, key is the name of the field, change type to file, then upload the file and hit SEND.
curl -H "Content-Type:multipart/form-data" -F "statistics-file=@/home/blacklapa/csv/ML20160401093841.csv" -X POST https://php7.melnalapa.lv/melnalapa5.8.2/public/api/credit24-statistics-upload
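A minimal sketch of the server side in plain PHP (the uploads/ target directory is an assumption):
# PHP
// The array key must match the -F field name from the cURL command
if (isset($_FILES['statistics-file'])) {
$target = __DIR__ . '/uploads/' . basename($_FILES['statistics-file']['name']);
move_uploaded_file($_FILES['statistics-file']['tmp_name'], $target);
}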
# Different class for each implementation
When you need different functionality based on a user type or some other factor, you can separate the functionality into its own class per type.
# PHP
$users = [
(object) [
'id' => 1,
'username' => 'john',
'type' => 'json'
],
(object) [
'id' => 2,
'username' => 'jeff',
'type' => 'plain'
],
];
$outputters = [
'json' => JsonOutputter::class,
'plain' => PlainOutputter::class,
];
class JsonOutputter
{
public function output($user)
{
echo json_encode($user);
}
}
class PlainOutputter
{
public function output($user)
{
echo $user->username;
}
}
foreach($users as $user) {
(new $outputters[$user->type])->output($user);
}
# Hide the API key in the backend
If your API call is made from the client to the API, your key will be exposed - you can't hide API keys in the frontend part of your website. To hide your API key, make the call from your frontend to your backend server, and then from your backend server make the call to the API with your API key, which is securely stored on the backend.
# Javascript
axios.post('your/backend/api').then(response => {
console.log(response.data)
})
# PHP
// API key securely hidden on the backend
$http = new GuzzleHttp\Client();
$response = $http->request('POST', 'https://www.api-service.com/', [
'form_params' => [
'api_key' => '4554c9ee-45asdf48w-fh5j4c1t9-8e84CC8f7a48-DF54d128zcc'
]
]);
# Laravel make login work across subdomains
By default your authentication will work only for the given domain or one subdomain, and you can sometimes end up in an infinite loop, redirecting back and forth. To make it so that logging in on the main domain also logs you in on every subdomain, edit config/session.php and change it like this.
'domain' => '.melnalapa.lv',
# Javascript - destructure function object argument
When an object is passed to a function, it can be destructured like so to pull out the properties/methods we want.
# Javascript
// Before
API.getData().then(response => {
console.log(response.data)
})
// After (using ES6 object destructuring)
API.getData().then(({ data }) => {
console.log(data)
})
# Vue.js - multiple layouts
Sometimes you may want to have different layouts for different sections of your web app. This can be done using nested routes.
Then in each layout you can have a router view.
# Javascript
<div id="app">
<router-view></router-view>
</div>
const router = new VueRouter({
routes:
[
{
path: '/',
component: MainLayout,
children:
[
{
path: '',
component: UserHome
},
{
path: 'profile',
component: UserProfile
},
{
path: 'posts',
component: UserPosts
}
]
},
{
path: '/auth',
component: AuthLayout,
children:
[
{
path: 'login',
component: AuthLogin
},
{
path: 'register',
component: AuthRegister
},
]
},
]
})
# Make a request to a server to do anything
By default, a website can't touch your local machine - that's great from a security standpoint. But what if I wanted to turn lights on/off in my house, open doors, launch applications on my computer, or control my system in any way from the website?
In order to do this, we have to install additional software on our computer - a server to communicate with - that has permission to do all of those things. Then we can make a request to our local server that is running on our machine and ask it to perform a certain task.
For example, say we wanted to shut down our computer when clicking a button in our web application.
First we need to create our server app.
# Javascript
npm init -y
Create app.js with the following content:
const express = require('express');
const childProcess = require('child_process');
const app = express();
app.use((req, res, next) => {
res.header("Access-Control-Allow-Origin", "*");
res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
next();
});
app.all('/shutdown', (request, response) => {
childProcess.execFileSync('shutdown', ['/s', '/f', '/t', '0']);
})
const server = app.listen(8080, () => {})
Then, to make our node server run on windows startup, we need to create .bat file with
node C:\path\to\app.js
And also create a .vbs script file with this content
Set WshShell = CreateObject("WScript.Shell")
WshShell.Run chr(34) & "C:\Program Files\my-new-node-server\nodeserver.bat" & Chr(34), 0
Set WshShell = Nothing
Then put this .vbs script in Start Menu > Startup > (right click) > All users
Finally, we can make a request to our server
axios.post('http://localhost:8080/shutdown')
# Create themes with CSS
# HTML
<html>
<head>
<meta charset="utf8">
<style>
* {
margin: 0;
padding: 0;
}
[data-theme="light"] {
--color-background: #e8e8e8;
--color-foreground: #464646;
}
[data-theme="dark"] {
--color-background: #313131;
--color-foreground: #d4d4d4;
}
.box {
background: var(--color-background);
color: var(--color-foreground);
height: 100vh;
display: flex;
justify-content: center;
align-items: center;
font-size: 20px;
}
.btn {
border-radius: 8px;
margin-left: 20px;
border: 1px solid var(--color-foreground);
padding: 10px;
font-size: 18px;
cursor: pointer;
user-select: none;
}
</style>
</head>
<body>
<div id="app" data-theme="dark">
<div class="box">
<div class="btn">
Change theme
</div>
</div>
</div>
<script>
document.querySelector('.btn').addEventListener('click', () => {
const theme = document.querySelector('#app').dataset.theme
if (theme == 'light') {
document.querySelector('#app').dataset.theme = 'dark'
} else {
document.querySelector('#app').dataset.theme = 'light'
}
})
</script>
</body>
</html>
# PHP - make POST request
Make a PHP POST request without cURL.
# PHP
echo file_get_contents('https://ej.uz', false, stream_context_create([
'http' => [
'method' => 'POST',
'header' => "Content-type: application/x-www-form-urlencoded",
'content' => http_build_query([
'key1' => 'Hello world!'
])
]
]));
# Create self-signed SSL for localhost
In order to get localhost to work on https:// we need to install a self-signed SSL certificate.
=========== SUPER EASY Way ===========
1. Open Powershell as Administrator and run these commands
Set-ExecutionPolicy AllSigned
Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
choco install mkcert
mkcert -install
mkcert localhost
2. Copy the ./localhost.pem and ./localhost-key.pem to C:\xampp\apache\crt\localhost
3. Edit your C:\xampp\apache\conf\extra\httpd-vhosts.conf and add an entry for 443 with certificates - like this...
<VirtualHost *:443>
ServerName www.mydomain.lv
DocumentRoot "C:/xampp/htdocs/mydomain.lv"
SSLEngine on
SSLCertificateFile "crt/mydomain.lv/www.mydomain.lv+1.pem"
SSLCertificateKeyFile "crt/mydomain.lv/www.mydomain.lv+1-key.pem"
</VirtualHost>
=========== MANUAL Way ===========
1. Create C:\xampp\apache\crt folder
This folder does not exist (likely) so you will need to create it.
2. Add cert.conf file in this folder with this content
[ req ]
default_bits = 2048
default_keyfile = server-key.pem
distinguished_name = subject
req_extensions = req_ext
x509_extensions = x509_ext
string_mask = utf8only

[ subject ]
countryName = Country Name (2 letter code)
countryName_default = US
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = NY
localityName = Locality Name (eg, city)
localityName_default = New York
organizationName = Organization Name (eg, company)
organizationName_default = Example, LLC
commonName = Common Name (e.g. server FQDN or YOUR name)
commonName_default = localhost
emailAddress = Email Address
emailAddress_default = [email protected]

[ x509_ext ]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
subjectAltName = @alternate_names
nsComment = "OpenSSL Generated Certificate"

[ req_ext ]
subjectKeyIdentifier = hash
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
subjectAltName = @alternate_names
nsComment = "OpenSSL Generated Certificate"

[ alternate_names ]
DNS.1 = localhost
3. Add make-cert.bat file in this folder with this content
@echo off
set /p domain="Enter Domain: "
set OPENSSL_CONF=../conf/openssl.cnf
if not exist .\%domain% mkdir .\%domain%
..\bin\openssl req -config cert.conf -new -sha256 -newkey rsa:2048 -nodes -keyout %domain%\server.key -x509 -days 365 -out %domain%\server.crt
echo.
echo -----
echo The certificate was provided.
echo.
pause
4. Run make-cert.bat
Enter localhost when prompted, and just press Enter for the other questions, since we already set the defaults in cert.conf.
5. Install the cert in Windows.
You will now have a localhost folder containing server.crt and server.key. This is our SSL certificate.
Double click on the
server.crt to install it on Windows so Windows can trust it.
And now this cert is installed and trusted in Windows. Next is how to use this cert in XAMPP.
6. Add the site in Windows hosts
Open C:\Windows\System32\drivers\etc\hosts and add this
127.0.0.1 localhost
7. Add the site in XAMPP conf.
We need to enable SSL for this domain and let XAMPP know where we store the SSL cert. So we need to edit C:\xampp\apache\conf\extra\httpd-xampp.conf
And add this code at the bottom:
<VirtualHost *:443>
DocumentRoot "C:/xampp/htdocs"
ServerName localhost
Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains"
SSLEngine on
SSLCertificateFile "crt/localhost/server.crt"
SSLCertificateKeyFile "crt/localhost/server.key"
</VirtualHost>
8. Allow your browser to trust self-signed certificates!
At first, when you try to access localhost over https, your browser likely won't trust your self-signed certificate and won't open the site. This last step is to enable settings in your browser to trust your self-signed certificate.
Chrome
Navigate to chrome://flags/#allow-insecure-localhost and enable it
Firefox
Go to about:config and set network.stricttransportsecurity.preloadlist to false
After that, you will need to restart Apache in XAMPP. It’s very simple, simply open XAMPP Control Panel and Stop and re-Start Apache Module. Then restart your browser and Done!
https://localhost/
# Vue.js - Component doesn't update when data changes
Sometimes your component refuses to re-render when its data changes (especially when working with a global store). To force your component to re-render, we can add a :key="" bound to the data that doesn't update.
# Javascript
export default {
template: `
<div class="cr-snackbar">
<div class="cr-snackbar-selection" :key="pickedTable">
Table {{ pickedTable }}
</div>
</div>
`,
data()
{
return {
pickedTable: '2',
}
},
mounted()
{
setInterval(() => {
this.pickedTable = '3'
}, 3000)
}
}
# Reverse proxy
So, you can only forward one port to one machine. If you want your app to run on a different port or a different machine, you will need to use a different external port, since port 80 can be forwarded to only one port on one machine. Frustrating, right?
Reverse proxy to the rescue! Using a reverse proxy you can still forward one port (e.g. 80, 443) to one machine, but that one machine will be a reverse proxy server that forwards the request to the correct port and machine depending on the hostname!
So if I type "john.com" in my browser, the request comes to my router, my router forwards port 80 to my reverse proxy @ 192.168.0.1:80, and then my reverse proxy looks at the hostname 'john.com' and forwards it to machine 192.168.0.30:80. And if I type "martin.com" in my browser, it will have a rule to go to 192.168.0.10:8080.
Or imagine a scenario where you have 3 apps running on 3 different ports on your machine. You would have to provide a port number to access them, like localhost:8080 and localhost:8081 and localhost:8082 - yuck! How about app1, app2 and app3, and let the reverse proxy forward to the correct port internally? Awesome, clean and simple! :)
You can even proxy Google if you want!
The only thing a user will see is that he is navigating to john.com or martin.com on port 80, without knowing which server and port the website is actually coming from. Super convenient!
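A minimal sketch of such a rule in Apache (the hostnames and backend address are assumptions; mod_proxy and mod_proxy_http must be enabled):
# Apache
<VirtualHost *:80>
ServerName john.com
# Forward everything for john.com to the internal machine
ProxyPreserveHost On
ProxyPass / http://192.168.0.30:80/
ProxyPassReverse / http://192.168.0.30:80/
</VirtualHost>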
# Websocket server on localhost - with remote site
Imagine the scenario: there is a remote web application on a server far, far away. In your room there is an RFID scanner and a computer with the remote web app open. You want to make it so that touching your card to the wireless scanner logs you into the remote app - AS SOON AS you scan your card. Polling the server every second? Yeah, not a very good idea.
Websockets to the rescue! So what we will do is run a local websocket server and then connect to it from our remote web app using a localhost connection.
# Node.js
const WebSocket = require('ws')
const server = new WebSocket.Server({
port: 12345
})
server.on('connection', ws => {
setInterval(() => {
ws.send('Random data: ' + Math.random())
}, 2000)
})
# Client Javascript
// Yes, localhost...
const connection = new WebSocket('ws://localhost:12345')
connection.addEventListener('message', event => {
console.log('Received message: ' + event.data)
})
# PM2 - autostart, recovery, watch Node apps
Getting Node.js apps to start automatically, recover from crashes, and restart on changes can be very frustrating to newcomers. But there is a tool out there that solves every one of these problems and more!
PM2 to the rescue! That's right! Plus it runs on every OS and is free! Sounds almost too good to be true! So how do you do this on Windows?
First you need to pm2 start app.js for all the apps that you want to start the next time Windows starts.
Second you need to pm2 save to save them for autostart.
Then you need to add pm2 to the Windows auto start (Linux is different). To do this you can install npm install pm2-windows-startup -g and then run pm2-startup install, and it will add a registry entry
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run
wscript.exe "C:\Users\Martins\AppData\Roaming\npm\node_modules\pm2-windows-startup\invisible.vbs" "C:\Users\Martins\AppData\Roaming\npm\node_modules\pm2-windows-startup\pm2_resurrect.cmd"
And that's it - the next time you restart your computer, your Node.js apps will start automatically!
# Apache enable mod_rewrite and load .htaccess
If you get 404 Not Found for Laravel paths, it means the rewrite is not working correctly. First you need to enable the Apache rewrite module (especially for Laravel):
sudo a2enmod rewrite && sudo systemctl restart apache2
And then in order for apache to load the local .htaccess file you need to allow it.
AllowOverride All
<VirtualHost *:80>
DocumentRoot "/var/www/html/melnalapa/public"
ServerName myapp
ErrorLog "/home/username/errors.log"
CustomLog "/home/username/custom.log" common
<Directory "/var/www/html/melnalapa/public">
Options Indexes FollowSymLinks Includes ExecCGI
AllowOverride All
Require all granted
</Directory>
</VirtualHost>
# Pretty URL rewrite
It is very easy to set up URL rewriting. All you need to do is redirect all traffic to index.php (except actual files and directories) and let your PHP script extract the information and show the appropriate page.
1. Enable the rewrite mod (Apache)
sudo a2enmod rewrite && sudo systemctl restart apache2
2. You have two options: 1) load an .htaccess file, or 2) put the configuration in the virtual host. If you are using .htaccess, make sure the server honors it by putting AllowOverride All in the Directory tag of the virtual host.
Apache
RewriteEngine On
# Everything that is not an actual file...
RewriteCond %{REQUEST_FILENAME} !-f
# Everything that is not an actual directory...
RewriteCond %{REQUEST_FILENAME} !-d
# Rewrite everything (.) to index.php and stop [L]
RewriteRule . index.php [L]
Nginx
location / {
try_files $uri $uri/ /index.php?$query_string;
}
PHP
if ($_SERVER['REQUEST_URI'] == '/hello-test/babh') {
echo "It is babh";
} else {
echo "hi";
}
echo $_SERVER['REQUEST_URI'];
print_r($_GET);
(!) Performance
A performance improvement can be achieved by moving the rewrite rules from the .htaccess file into the VirtualHost block of your Apache configuration and then changing AllowOverride All to AllowOverride None in your VirtualHost block.
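A sketch of what that looks like (the paths are assumptions; the rules are the same ones shown above):
# Apache
<VirtualHost *:80>
DocumentRoot "/var/www/html/myapp"
<Directory "/var/www/html/myapp">
# .htaccess is no longer read, so the rewrite rules live here
AllowOverride None
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . index.php [L]
</Directory>
</VirtualHost>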
# Create a local .dev domain using a wildcard TLS certificate
Google bought the .dev TLD and decided to put all .dev domains on their HSTS preload list, which means that ALL .dev websites can only be accessed using https. Well, that upset a lot of developers who were using the .dev domain for local development. But it shouldn't have, really, because you can just install a self-signed .dev certificate and make your OS and browser trust self-signed certificates.
It's a little bit more difficult when you have lots of developers and machines, but even then you can create a script that will install the certificate on all your local machines.
Let's do it!
1. Create the certificate. Open Powershell as Administrator and run:
P.S. Make sure to install mkcert first.
Now, we need to create a certificate with 2 SANs (Subject Alternative Names), because *.martins.dev alone will not match martins.dev (without www or anything else before it). Just separate them using a single space.
C:\Users\Martins> mkcert martins.dev *.martins.dev
This will create a wildcard certificate that will work for all .martins.dev sites
2. Copy the martins.dev+1.pem and martins.dev+1-key.pem
Copy these keys from
C:\Users\Martins to C:\xampp\apache\crt\martins_dev (create the crt folder if it doesn't exist)
3. Add entries in your C:\Windows\System32\Drivers\etc\hosts file
127.0.0.1 martins.dev
127.0.0.1 www.martins.dev
4. Create your virtual host
Open your C:\xampp\apache\conf\extra\httpd-vhosts.conf and add an entry
<VirtualHost *:443>
DocumentRoot "C:/xampp/htdocs"
ServerName martins.dev
ServerAlias www.martins.dev
# Remove double slash
RedirectMatch 301 ^//(.*)$ http://martins.dev/$1
# Add www
RewriteEngine On
RewriteCond %{HTTP_HOST} ^martins.dev [NC]
RewriteRule ^(.*)$ https://www.martins.dev/$1 [L,R=301]
# Always use TLS
Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains"
SSLEngine on
SSLCertificateFile "crt/martins_dev/martins.dev+1.pem"
SSLCertificateKeyFile "crt/martins_dev/martins.dev+1-key.pem"
</VirtualHost>
5. Open XAMPP Control Panel and Restart Apache
6. (Extra) step - make your OS & browser trust self-signed certificates, otherwise it won't work (google it)
Basically, you have to add the certificate to your browser's trusted root certificate authorities.
For Chrome, you will have to add the certificate to
Settings > Manage Certificates > Trusted Root Certification Authorities. If you're doing this on a second computer, then first go to Trusted Root Certification Authorities, find the mkcert certificate and press Export and then Import it on the second computer.
For Firefox it is similar. Open
Preferences > Certificates > View Certificates > Import > Trust this CA to identify websites
For Windows, you can script adding the certificate into the trusted root certificates; that way you won't have to touch every browser (a sketch follows).
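For example, with the built-in certutil tool (run as Administrator; the path assumes mkcert's default CAROOT location):
certutil -addstore -f Root "%LOCALAPPDATA%\mkcert\rootCA.pem"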
Restart your browser and you're all done!
# Laravel Queue Worker
Sometimes we want to run some task in the background later so the user doesn't have to wait. We can do this using the Laravel Queue Worker! Here is how.
1. Create the jobs table
You can get the table by running php artisan queue:table, which will create a migration that you can run using php artisan migrate
2. Set up the config
If you want to specify a different database connection than your default, you can edit config/queue.php
'database' => [
'connection' => 'OTHER_DB_CONNECTION',
'driver' => 'database',
'table' => 'jobs',
],
3. Create your job
php artisan make:job SendEmailAboutOrderWithDiscount
// To dispatch (add to the queue) your job
SendEmailAboutOrderWithDiscount::dispatch()
4. Change in .env QUEUE_DRIVER (or QUEUE_CONNECTION) from sync to database
5. Install supervisor
Let's install supervisor, which will allow us to run our worker in the background, auto-start it with the OS, and restart it if it crashes.
sudo apt-get install supervisor
6. Configure supervisor
Now we need to add our supervisor script for our Laravel queue worker. Create a file /etc/supervisor/conf.d/laravel-worker.conf with this content (make sure you enter your username):
[program:laravel-queue-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/my_laravel_app/artisan queue:work
autostart=true
autorestart=true
user=YOUR_LINUX_USERNAME
numprocs=3
redirect_stderr=true
stdout_logfile=/home/YOUR_LINUX_USERNAME/laravel-queue-worker.log
Now run these commands and you're all done!
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start laravel-queue-worker:*
sudo supervisorctl restart all
# How to create a sound server
If you need to be able to push a button in one room and then have a sound go off in another room - you can set up a Raspberry Pi server in the other room with a speaker connected that will listen for an incoming request and play a sound.
This can be done over the internet or local LAN router.
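A minimal sketch of such a server (Node.js/Express, like the shutdown example earlier; the sound file path and the aplay player are assumptions):
# Node.js
const express = require('express');
const childProcess = require('child_process');
const app = express();
app.all('/ring', (request, response) => {
// aplay ships with ALSA on Raspberry Pi OS - swap in any player you have
childProcess.execFile('aplay', ['/home/pi/sounds/ding.wav']);
response.send('OK');
});
app.listen(8080);
Then the button in the other room just fires a request to http://<pi-ip>:8080/ring.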
# How to output the contents of a file in binary, hex, decimal, ascii
// Output file in binary
xxd -b filename
// Output file in decimal
od -t u1 filename
// Output file in hexadecimal
hexdump -C filename
// Output file in ascii
cat filename
# How to install TLS certificate for Mikrotik
1. Create your LocalCA root certificate. Open a terminal and run (2 commands):
/certificate add name=LocalCA common-name=LocalCA key-usage=key-cert-sign,crl-sign
/certificate sign LocalCA
Now, export this LocalCA
/certificate export-certificate LocalCA
Now Open Files (left side menu), right click on your certificate and click "download"
Now, install your LocalCA on your computer/browser
Open your LocalCA and click "Install certificate..." or open your browser certificates settings and import it there (in trusted root certificate authorities).
2. Generate your certificate and sign it using your LocalCA root CA
For an IP address (if you want to access it like https://192.168.1.1)
/certificate add name=Webfig common-name=192.168.1.1 subject-alt-name=IP:192.168.1.1
/certificate sign Webfig ca=LocalCA
For a Hostname/Domain address (https://mymikrotik/)
/certificate add name=Webfig common-name=mymikrotik subject-alt-name=DNS:mymikrotik
/certificate sign Webfig ca=LocalCA
# Install LAMP with phpMyAdmin (Ubuntu 18.04)
Install Apache
$ sudo apt update && sudo apt install apache2 -y
Allow incoming HTTP and HTTPS traffic for Apache
$ sudo ufw allow in "Apache Full" && sudo ufw allow ssh
Install MySQL
$ sudo apt install mysql-server -y
Set root password for MySQL
$ sudo mysql_secure_installation
Enable MySQL root user to login using password (Otherwise, it won't work for phpMyAdmin)
$ sudo mysql
mysql> UNINSTALL COMPONENT 'file://component_validate_password';
mysql> ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'mySuperStr00ng_P@sw1d!';
mysql> FLUSH PRIVILEGES;
mysql> exit
Install PHP
$ sudo apt install php libapache2-mod-php php-mysql openssl php-common php-curl php-json php-mbstring php-mysql php-xml php-zip php-bcmath unzip php-imagick
# Enable the Apache rewrite module
$ sudo a2enmod rewrite && sudo systemctl restart apache2
# Open /etc/php/7.4/apache2/php.ini and make sure these settings are set like this
error_reporting = E_ALL
display_errors = On
Install phpMyAdmin
$ sudo apt install phpmyadmin
# Open /etc/apache2/apache2.conf and add this line at the bottom
# phpMyAdmin Configuration
Include /etc/phpmyadmin/apache.conf
Set proper permissions for /var/www
$ sudo adduser $USER www-data
$ sudo chown $USER:www-data -R /var/www
$ sudo chmod u=rwX,g=srX,o=rX -R /var/www
Restart Apache
$ sudo systemctl restart apache2
Enable root login over SSH
$ sudo nano /etc/ssh/sshd_config
Add this line at the bottom: PermitRootLogin yes
$ systemctl restart sshd
Add yourself to sudo group
$ sudo usermod -aG sudo $USER
Set up sendmail to be able to send emails
# Change your host name to something like my-domain.xx
$ sudo nano /etc/hostname
$ sudo apt-get install sendmail
$ sudo sendmailconfig
# Finally, add an SPF DNS entry for your domain name, so that other email providers won't reject your emails.
# If an entry already exists, you can add the IP from which sendmail will be allowed to send emails at the end, like so:
v=spf1 ip4:91.203.68.167 a mx ip4:91.203.68.160/27 +ip4:85.31.97.34 +ip4:81.198.65.19 -all
# Linux send and receive emails
Set up to send emails
$ sudo apt-get install sendmail
$ sudo nano /etc/hostname
Change your host name to something like my-domain.xx
$ sudo sendmailconfig
Finally, add an SPF DNS entry for your domain name, so that other email providers won't reject your emails. If an entry already exists, you can add the IP from which sendmail will be allowed to send emails at the end, like so:
v=spf1 ip4:91.203.68.167 a mx ip4:91.203.68.160/27 +ip4:85.31.97.34 +ip4:81.198.65.19 -all
DKIM (optional)
It might also be a good idea to set up DKIM to get even better deliverability.
Set up to receive emails
1. Add an MX DNS record for your domain
Domain: yourdomain.com
Type: MX
Target: yourdomain.com
2. Install postfix
$ sudo apt install postfix
Also make sure that it is listening on all interfaces (not only localhost)
3. Open / forward port 25 on your router's firewall
# Force Google.com to be in English
It's better to change Google's language to English, and it's also very important to go to Settings at the bottom of the page and change your region to United States.
# Check if any process is listening to a port
Also, this way we can check if this port is open.
$ sudo netstat -anp | grep 443
# Linux listen to a port
$ nc -l 443
# Clear/flush DNS cache
1. Clear browser cache (stored for 1 minute)
chrome://net-internals/#dns
2. Clear OS cache (stored for 1 day)
ipconfig /flushdns
3. Clear router cache (stored for 7 days or TTL)
Mikrotik > IP > DNS > Cache > Flush Cache
# Free TLS encryption using Cloudflare
1. Create a Cloudflare account with your domain
2. Set Cloudflare SSL settings to Flexible SSL so it will proxy https:// over TLS to http:// over plain HTTP
3. In your virtual host you just need an entry for port 80 and that's all - Cloudflare takes care of the rest!
# Redirect non-www to www and http to https (Apache Windows)
<VirtualHost *:80>
ServerName erglasparni.lv
RewriteEngine On
RewriteRule ^(.*)$ https://www.%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
</VirtualHost>
<VirtualHost *:80>
ServerName www.erglasparni.lv
RewriteEngine On
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
</VirtualHost>
<VirtualHost *:443>
ServerName erglasparni.lv
SSLEngine on
SSLCertificateFile "crt/erglasparni.lv/www.erglasparni.lv+1.pem"
SSLCertificateKeyFile "crt/erglasparni.lv/www.erglasparni.lv+1-key.pem"
RewriteEngine On
RewriteRule ^(.*)$ https://www.%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
</VirtualHost>
<VirtualHost *:443>
ServerName www.erglasparni.lv
DocumentRoot "C:/xampp/htdocs/erglasparni"
SSLEngine on
SSLCertificateFile "crt/erglasparni.lv/www.erglasparni.lv+1.pem"
SSLCertificateKeyFile "crt/erglasparni.lv/www.erglasparni.lv+1-key.pem"
</VirtualHost>
127.0.0.1 erglasparni.lv
127.0.0.1 www.erglasparni.lv
# Linux list all USB devices
1. ls /dev/* | grep USB
($ udevadm info -a -n /dev/ttyUSB0 - to get info about a device)
2. lsusb
3. dmesg
4. usb-devices
5. lsblk
# How Google handles load balancing
The things I didn't know about were technologies called Anycast, GeoDNS and Round Robin DNS. These are types of DNS load balancing, and it seems like Google uses all 3 of them.
Here is how GeoDNS load balancing works.
www.google.com is hit billions of times every day by users from all over the world. How does it load balance itself?
Well, Google owns its own authoritative DNS name servers (which means it can return whatever IP address it wants each time), so it first determines the location of your IP and then spits out a random A record (or multiple records) for servers near you. (GeoDNS)
That's why every time you do a DNS lookup for www.google.com (or look it up from different locations world-wide) you get a different IPv4 A record (or multiple records). That's how they balance the load, and from there they can balance it even further. It's like Round Robin DNS on steroids.
Here is how Anycast load balancing works.
We have always been taught one fundamental law of networking: every computer on the internet needs a single unique IP address, otherwise how else would the network know where to deliver a packet... right?
Well, that might have been a lie. The trick is to give all your front load-balancing servers the same IP address and have the internet routers choose the closest and fastest route to it.
So when you type in your browser www.google.com everybody is given a single IP address but internet routers will route it to the server that is closest to you. But another person in another country will be routed to a different server (although with the same IP address). These are called Points of Presence (PoP).
# Find Anycast IP location
To find the geographical location of an anycast IP we can use traceroute and check geo location of the last n number of IPs. That way we can know where the request is being routed.
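For example (the addresses here are only placeholders):
$ traceroute 8.8.8.8
# Then geo-locate the last few hop IPs, e.g.:
$ whois 216.239.48.1 | grep -i country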
# Web scraping with authentication using cookies
Using cURL we can specify the headers that we want to send, including cookies. Make sure you specify the right cookie format for the HTTP version of the server. It may be "cookie: " or "Cookie: " or even ":cookie: " - so keep that in mind. You can examine the request headers using Chrome Devtools.
$request = [];
$request[] = "cookie: f5avraaaaaaaaaaaaaaaa_session_=
NOPEJKHAKDFEFLBHCHCMPPJAGIFALCLLCPBFMLOEGDKDEGEJBCHKJD
AIAFLNBAFMMACDINODMHBLMAKPOGKALCHCFNKDCAJEIOGEJFIPIDCN
JHLFAILKMMCMBOKCOPBK; f5avraaaaaaaaaaaaaaaa_session_=A
KMEHOCLKDIPFLIEFNNLADKDFNLAKNGLMJFJIKLJAKNFCDKIEKHHCLAM
FONHFMBODBIDNGNHNHBBEHFCIBMAKEEAFNJODKPHIIGBDBKDLIINLCC
FCOKFKLBPJPGOLIBF; _ga=GA1.2.2072835136.1585895363; _gi
d=GA1.2.881662520.1585895363; _fbp=fb.1.1585895407150.1
658875119; XSRF-TOKEN=eyJpdiI6Ik5wbUZVNmJPMXhuOUlKRGpoS
GIyMFE9PSIsInZhbHVlIjoidm5icTNNamZJS0dsXC9Jek9TY3djbXl1
NFR3NFlIZTBhajNkRURmSFEwMWVhc2loN1pHKzdIQzlMWXZhWmFBVlc
iLCJtYWMiOiI1YWIwZWQyMDc3N2Y0Zjg2ODBkZWYxODIwNWE2ZWQyNW
U5ZGQ4NTBjMTEwOTU5MWJlNWI0Y2JkNTk2Y2Y3MTYzIn0%3D; rimi_
storefront_session=eyJpdiI6IitqVkJGYnZROVVVUHlJV1haVFds
UVE9PSIsInZhbHVlIjoiZXpPTVB5Wkd4SjdRZkE2UnlNZ0c5T2U3R1V
1dFJ1SEY4dkdrNGJ4aUdOb0U1c0J1MTliYzR0MlVabGVUUUZheiIsIm
1hYyI6ImJjNmNmMjZiMzg0ODkwNWU5NTY3YTM5Yzk5MTZmM2JkZTRkZ
mMzYjhmYmUwNGVhNWE3NzIzZGMxMjYxNTcyOWQifQ%3D%3D; TS01d5
211d=018cbed938e7b178829b4671c0db321d5bc9a385c3777b76ef
9a18901160b816e295439262a20806accb9386c4b7b2d9321f3b677
1ff9c73589dbfb9bb777bdb3d7688dc5541bdc60e43e8b4b18114f7
330693f54422f2ac010dfa3b18814c0c969f432317";
$curl = curl_init('
https://www.rimi.lv/e-veikals/lv/checkout/reserve/time-slots
');
curl_setopt($curl, CURLOPT_HTTPHEADER, $request);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, TRUE);
$result = curl_exec($curl);
$html = new DOMDocument();
@$html->loadHTML($result);
$xpath = new DOMXPath($html);
$nodes = $xpath->query("//label[@class='gtm']");
foreach ($nodes as $node) {
echo $node->nodeValue;
}
# Free e-mail hosting
One of the best ones I have found is mail.ru - it offers unlimited space, and you can use any client, as it supports the IMAP and POP protocols.
You will just need to set up your DNS MX and TXT (SPF, DKIM) records. Then you can use the default Mail.ru web interface or any other client you wish (even Gmail).
# Monitor other Linux user commands
sudo apt install sysdig
sysdig -c spy_users
# Linux get total size of directory
sudo du -sh /var/www
// Get all directories' sizes
sudo du -sh /var/www/*
or
sudo du -sh $(ls -A /home/martins/) | sort -h (to include hidden directories + sort)
2.3G /var/www/html/burger_studio
1.7G /var/www/html/cepiens
4.5G /var/www/html/factory
1.8G /var/www/html/hacapurija
12K /var/www/html/index.html
2.8G /var/www/html/karbonadexxl
1.7G /var/www/html/leningrad
1.5G /var/www/html/moods
2.6G /var/www/html/muca
3.2G /var/www/html/pizzatime
4.0K /var/www/html/production
3.6G /var/www/html/rest
1.6G /var/www/html/salt
1.7G /var/www/html/trisrebes
# Let's Encrypt Certbot wildcard certificates
$ certbot certonly --text --agree-tos --manual-public-ip-logging-ok --renew-by-default --email [email protected] --manual --preferred-challenges dns -d '*.yourcooldomain.com' -d yourcooldomain.com
# You might need to add to crontab to renew...
0 1 * * * /usr/bin/certbot renew >> /var/log/letsencrypt/renew.log
# Letting your app handle all URLs and domains
Point all requests to your site's index.php page and let it handle everything.
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.php [L]
Then, in Laravel, you can group routes per domain:
Route::domain('sub.domain.com')->group(function ()
{
Route::post('/', 'HomeController@index');
});
You can even dynamically match the subdomain like this...
Route::domain('{account}.myapp.com')->group(function () {
Route::get('user/{id}', function ($account, $id) {
//
});
});
# How Laravel's Route Model Binding works under the hood
class User
{
public function where($column, $value)
{
$this->id = $value;
if ($this->id == 1) $this->username = 'Martins';
if ($this->id == 2) $this->username = 'Janis';
return $this;
}
}
class HomeController
{
public function index(User $user)
{
print_r($user);
}
}
$id = $_GET['user'] ?? 1;
// $method = 'HomeController@index';
$method = new ReflectionMethod('HomeController', 'index');
foreach ($method->getParameters() as $parameter) {
// Note: getClass() is deprecated since PHP 8 - use $parameter->getType() instead
$class = $parameter->getClass()->name;
}
$model = new $class;
$model = $model->where('id', $id);
// Inject model into controller function
$method->invokeArgs(new HomeController, [
$model,
]);
/**
* We are basically doing something like this,
* except we are doing it dynamically.
*/
$model = new User;
$model->where('id', $id);
$HomeController = new HomeController;
$HomeController->index($model);
# Avoid if-else statements (Replace Conditional with Polymorphism)
if ($type == 'created_thread') {
    echo "He created a thread";
}

if ($type == 'created_reply') {
    echo "He replied to a thread";
}

// Instead try this
include "partials/$type";
// Another example
$array = [
    'Customer' => Martin\Repository\CustomerService::class,
    'Driver' => Martin\Repository\Driver::class,
];

$user_type = 'Customer';
$user = new $array[$user_type];

$user->login();
$user->register();
# Keeping controllers slim
class PostController extends Controller
{
public function update(UpdatePostForm $form)
{
// Authorize the action
request()->authorize();
// Validate the request
request()->validate([
'title' => 'required',
'body' => 'required',
]);
// Update post
Post::update(request()->all());
// Send notification email
Mail::send([
'to' => '[email protected]',
'title' => 'Title',
'body' => 'Body',
]);
// Notify users
Users::notify();
// Add item to newsletter
// Print pdf
// ....
}
}
class PostController extends Controller
{
public function update(UpdatePostForm $form)
{
$form->save();
event(new PostUpdated); // for side effects :)
// Or just call service classes
Mail::send();
Users::notify();
}
}
# Deploy with rsync
# First add an SSH key so you won't need to enter your password:
$ ssh-keygen
$ ssh-copy-id [email protected] -p 1234

# This adds the generated key to the host's ~/.ssh/authorized_keys
# On Windows you can manually copy the key from the id_rsa.pub file to ~/.ssh/authorized_keys (on a new line)

# Enable Cloudflare developer mode :)

# Now, to copy the contents of a local folder to the remote folder's contents (the trailing slash is important)
$ rsync -arv -e 'ssh -p 1234' --progress --delete /home/martins/Documents/todo/dist/ [email protected]:/home/martins/Server/webserver/var/www/todo.martinsz.lv/

# You can also assign it as an alias
$ alias deploy="rsync -arv -e 'ssh -p 1234' --progress --delete /home/martins/Documents/todo/dist/ [email protected]:/home/martins/Server/webserver/var/www/todo.martinsz.lv/"

# Now you can do this
$ npm build && deploy

# You could also set up rsync to automatically watch for file changes and sync with remote
# For Windows you can use the WinSCP Synchronization and "Keep remote directory up to date" features to sync.
# Set up SSH config for alias without password
Windows (!)
$ Error: Bad owner or permissions on C:\Users\Martins\.ssh\config
To get this to work on Windows you need to do two things. First, make sure that the config file is owned by, and grants permissions ONLY to, the user that will be using SSH. Remove all other entries and disable inheritance.
Second, your computer name CANNOT be the same as your user account name, so you may need to rename it - e.g. rename John/John to something like John/JohnPC.
Host martins alias2 alias3
HostName ssh.martin...
User martins
Port 12345678
IdentityFile C:\Users\Martins\.ssh\id_rsa
Host martinsz martinsz.lv
HostName ssh.mart...
User martins
Port 12345678
IdentityFile C:\Users\Martins\.ssh\id_rsa
Host base baasealias2
Hostname base.ml.lv
User bl...
Port 12345
IdentityFile C:\Users\Martins\.ssh\id_rsa
Host github.com-personal
HostName github.com
User git
IdentityFile C:\Users\Martins\.ssh\id_rsa_github_personal
Host github.com-work
HostName github.com
User git
IdentityFile C:\Users\Martins\.ssh\id_rsa_github_work
ssh base
git clone git@github.com-personal:vuejs/vue.git or git clone git@github.com-work:vuejs/vue.git
# Execute a command on remote server
$ ssh [email protected] -p 1234 "touch ~/somefile.txt"

$ cat ~/.ssh/id_rsa.pub | ssh username@remote_host "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
# Backup and Restore all mysql databases in docker
$ docker exec mysql /usr/bin/mysqldump -u root --password=musyperstrongpassword123 --all-databases > /home/martins/Server/mysql-backup.sql
$ cat /home/martins/Server/mysql-backup.sql | docker exec -i mysql /usr/bin/mysql -u root --password=musyperstrongpassword123
# Transform any function to debounced
# Javascript
export default {
methods:
{
async fetchMembers(at)
{
const result = await axios.get(`/api/users?name=${at}`)
this.members = result.data
},
},
created()
{
this.fetchMembers = _.debounce(this.fetchMembers, 500)
},
}
# Set up Laravel Sanctum for client/server API Token authentication
# After installing a new Laravel app...
$ composer require laravel/sanctum
$ php artisan vendor:publish --provider="Laravel\Sanctum\SanctumServiceProvider"
$ php artisan migrate
# 1. Add Sanctum's middleware to your api middleware group within your app/Http/Kernel.php file
'api' => [
\Laravel\Sanctum\Http\Middleware\EnsureFrontendRequestsAreStateful::class,
'throttle:api',
\Illuminate\Routing\Middleware\SubstituteBindings::class,
],
# 2. Add HasApiTokens trait to User model
use Laravel\Sanctum\HasApiTokens;
class User extends Authenticatable
{
use HasApiTokens, HasFactory, Notifiable;
}
# 3. Make sure the api and web route groups are registered in RouteServiceProvider.php
public function boot()
{
$this->configureRateLimiting();
$this->routes(function () {
Route::middleware('api')
->namespace($this->namespace)
->group(base_path('routes/api.php'));
Route::middleware('web')
->namespace($this->namespace)
->group(base_path('routes/web.php'));
});
}
# 4. Allow CORS in config/cors.php (a permissive example)
'paths' => ['*'],
'allowed_methods' => ['*'],
'allowed_origins' => ['*'],
'allowed_origins_patterns' => [],
'allowed_headers' => ['*'],
'exposed_headers' => [],
'max_age' => 0,
'supports_credentials' => true,
# 5. Create a controller that issues API tokens on login
namespace App\Http\Controllers;
use Exception;
use App\Models\User;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Hash;
use Illuminate\Support\Facades\Auth;
class AuthController extends Controller
{
public function login(Request $request)
{
try {
$credentials = request(['email', 'password']);
if (!Auth::attempt($credentials))
{
return response()->json([
'status_code' => 500,
'message' => 'Unauthorized'
]);
}
$user = User::where('email', $request->email)->first();
if (!Hash::check($request->password, $user->password, [])) {
throw new Exception('Error in Login');
}
$token = $user->createToken('authToken')->plainTextToken;
return response()->json([
'status_code' => 200,
'access_token' => $token,
'token_type' => 'Bearer',
]);
} catch (Exception $error) {
return response()->json([
'status_code' => 500,
'message' => 'Error in Login',
'error' => $error,
]);
}
}
}
# 6. Register the routes
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;
use App\Http\Controllers\AuthController;
Route::post('login', [AuthController::class, 'login']);
Route::middleware('auth:sanctum')->get('/user', function (Request $request) {
return $request->user();
});
# 7. Example client-side login and authenticated request
async fetchData()
{
// Login
let result = await axios.post('http://localhost:8000/login', {
email: '[email protected]',
password: '123456',
})
console.log(result.data)
// Get user data
let options = {
headers: {
authorization: 'Bearer ' + result.data.access_token
}
}
result = await axios.get('http://localhost:8000/user', options)
console.log(result.data)
},
# Make web app load in full screen
<meta name="mobile-web-app-capable" content="yes">
<meta name="apple-mobile-web-app-capable" content="yes">
document.addEventListener('click', enterFullscreen)
function enterFullscreen() {
let element = document.documentElement
if (!document.fullscreenElement) {
element.requestFullscreen()
}
}
# Add alias for cmd in Windows
function commit ($message) {
git add .
git commit -m $message
git push
}
DOSKEY commit=git add . $T git commit -m $1 $T git push
# Hide menu when clicking outside browser window
<div class="el" tabindex="0" onblur="console.log('blurred')">
This will receive focus
</div>
<button>Click me</button>
<script>
document.querySelector('button').addEventListener("click", () => {
document.querySelector('.el').focus()
})
</script>
# Useful aliases
# PowerShell (e.g. in your PowerShell profile)
function commit ($message) {
git add .
git commit -m $message
git push
}
function pull () {
git pull
}
function dcu () {
docker-compose up -d
}
function dcd () {
docker-compose down
}
function dcb () {
docker-compose up -d --build
}
function dcls () {
docker container ls -a
}
function dcl () {
docker-compose logs --follow
}
function dp () {
docker system prune -a
}
function drc () {
docker rm -f $(docker ps -a -q)
}
function dri () {
docker rmi -f $(docker images -a -q)
}
function drv () {
docker volume rm $(docker volume ls -q)
}
# Bash / Zsh equivalents
commit() {
git add . && git commit -m "$1" && git push
}
alias pull='git pull'
alias dcu='docker-compose up -d'
alias dcd='docker-compose down'
alias dcb='docker-compose up -d --build'
alias dcls='docker container ls -a'
alias dcl='docker-compose logs --follow'
alias dp='docker system prune -a'
alias drc='docker rm -f $(docker ps -a -q)'
alias dri='docker rmi -f $(docker images -a -q)'
alias drv='docker volume rm $(docker volume ls -q)'
alias myip='echo $(dig +short myip.opendns.com @resolver1.opendns.com)'
# Port range
# Refactoring - just write it out
# Stackblitz.com
https://stackblitz.com/

# Useful Vue3 functions
https://github.com/antfu/vueuse
useInterval, useIntervalFn, useTimeout, useTimeoutFn, useTimestamp, useClipboard, useFullscreen, useWebSocket, onClickOutside, useIdle, useWindowSize, useLocalStorage, useDebounce, useDebounceFn, useThrottle, useThrottleFn

https://github.com/microcipcip/vue-use-kit
useKey, useSearchParams
# Vue 3 auto update props easily
import { computed } from 'vue'

// Assumes props and emit come from defineProps(['modelValue']) and defineEmits(['update:modelValue'])
const message = computed({
get: () => props.modelValue,
set: (value) => emit('update:modelValue', value)
})
# Nginx cache fonts (if not by default)
location ~* \.(?:eot|woff|woff2|ttf|svg|otf)$ {
access_log off;
log_not_found off;
expires 5m;
add_header Cache-Control "public";
add_header Access-Control-Allow-Origin *;
types {font/opentype otf;}
types {application/vnd.ms-fontobject eot;}
types {font/truetype ttf;}
types {application/font-woff woff;}
types {font/woff2 woff2;}
}
# Fetch API error handling
fetch('https://pokeapi.co/api/v2/pokemon/a*b/') // 404 error
.then(response => {
if (!response.ok) {
throw Error(response.statusText)
}
return response.json()
}).then(data => {
console.log(data)
}).catch(error => console.error(error))
# Making a PWA with literally 1 line of code
https://medium.com/javascript-in-plain-english/making-a-pwa-with-literally-1-line-of-code-106a0e9405c8

# Same Virtual Host - multiple locations
# LARAVEL seems to need that trailing slash /foo/
location /foo/ {
proxy_pass http://localhost:3200/; # note the trailing slash!
}
or (maybe?)
location /foo {
proxy_pass http://localhost:3200/;
}
or if there is a problem with decoded urls then
location /foo {
rewrite /foo/(.*) /$1 break;
proxy_pass http://localhost:3200;
proxy_redirect off;
proxy_set_header Host $host;
}
# NGINX - different server based on HTTP headers
map $http_accept $upstream {
default 192.168.100.1:8000;
~.*text/html.* 192.168.1.100:45961;
}
location / {
proxy_pass http://$upstream;
}
# We can also chain if statements like this..
# If a request is for host "example.com" AND the source ip address is 192.168.100.1, return a different home page.
# Test the host name, and assign the home page filename value to the $map_link variable
map $host $map_link {
default "index.html";
"example.com" "index_new.html";
}
# Test the source address, and if it matches the relevant address, interpolate the value assigned from the previous map
map $remote_addr $index_page {
default "index.html";
"192.168.100.1" "${map_link}";
}
location / {
....
index $index_page;
....
}
# Windows 10 always show login password field
In order for the Windows 10 lock screen to always show the login password field, you must enable the "Do not display the lock screen" option in group policy. Open "Edit group policy" and go to Computer Configuration > Administrative Templates > Control Panel > Personalization > Do not display the lock screen (Enable it).

# Send a plain TCP packet using telnet
$ telnet 192.168.1.165 9100
type your text here (hit enter)

# or... you can even replace curl
$ telnet 192.168.1.100 80
GET /test-endpoint HTTP/1.0
(hit enter)
(hit enter again to send)
# failiem.martinsz.lv/upload
# Chrome PDF reader dark mode
var cover = document.createElement("div");
let css = `
position: fixed;
pointer-events: none;
top: 0;
left: 0;
width: 100vw;
height: 100vh;
background-color: #67645e;
mix-blend-mode: difference;
opacity: .5;
z-index: 1;
`
cover.setAttribute("style", css);
document.body.appendChild(cover);
# Linux create a swap partition
# Check free memory before
$ free -m

$ mkdir -p /var/_swap_
$ cd /var/_swap_

# Here, 1M * 2000 ~= 2GB of swap memory. Feel free to add MORE
$ dd if=/dev/zero of=swapfile bs=1M count=2000
$ chmod 600 swapfile
$ mkswap swapfile
$ swapon swapfile

# Automatically mount this swap partition at startup
$ echo "/var/_swap_/swapfile none swap sw 0 0" >> /etc/fstab

# Check free memory after
$ free -m
# Ubuntu boot and startup sequence
- Kernel boots
-- Kernel launches /usr/sbin/init (symlink to SysVinit, upstart or systemd)
--- systemd executes /usr/lib/systemd/system/default.target
---- default.target executes /etc/systemd/system/display-manager.service
----- display-manager.service is a symlink to /lib/systemd/system/gdm3.service
------ gdm3.service is a symlink to /lib/systemd/system/gdm.service
------- gdm.service executes /usr/sbin/gdm3
-------- gdm3 starts the X server and, after the user logs in, executes /usr/share/xsessions/ubuntu.desktop
--------- ubuntu.desktop executes /usr/bin/gnome-shell
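You can verify parts of this chain on your own machine with a few read-only commands (assuming a systemd-based Ubuntu):

$ readlink -f /usr/sbin/init
$ systemctl get-default
$ systemctl status display-manager.service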
# Fix Trello to look and function better
// Wider
document.querySelectorAll('.js-list.list-wrapper').forEach(el => {
el.style.width = "630px"
});
document.querySelectorAll('.list-card.js-member-droppable.ui-droppable').forEach(el => {
el.style.maxWidth = "100%"
});
// Formatting options
document.querySelector('#banners').style.display = 'none';
document.querySelector('[data-desktop-id="header"]').style.display = 'none';
document.querySelector('.board-header.u-clearfix.js-board-header').style.overflow = 'hidden';
document.querySelector('.board-header.u-clearfix.js-board-header').style.height = '0px';
document.querySelector('.board-header.u-clearfix.js-board-header').style.opacity = 0;
document.querySelectorAll('.list-card-title.js-card-name').forEach(el => {
el.innerHTML = el.innerHTML.replace(/\*\*(.*)\*\*/g, "<b>$1</b>");
el.innerHTML = el.innerHTML.replace(/__(.*)__/g, "<u>$1</u>");
el.innerHTML = el.innerHTML.replace(/~~(.*)~~/g, "<i>$1</i>");
});
// Sub items
var cards = document.querySelectorAll("a.list-card.js-member-droppable.ui-droppable");
var subCard;
for (var i = 0; i < cards.length; i++) {
if (cards[i].innerText.startsWith('--')) {
subCard = cards[i];
subCard.style.marginLeft = '25px';
let element = subCard.querySelector('.list-card-title.js-card-name')
element.innerText = element.innerText.replace('--', '')
}
}
// Done list transparency
var allLists = document.querySelectorAll('.list.js-list-content')
allLists.forEach(item => {
if (item.innerHTML.includes('>Done</h2>')) {
let elements = item.querySelectorAll('.list-card')
elements.forEach(item => {
item.style.opacity = 0.7
})
}
})
// Add time <30 min>
var allCards = document.querySelectorAll('.list-card.js-member-droppable.ui-droppable')
allCards.forEach(item => {
item.style.position = 'relative';
})
var allCardTitles = document.querySelectorAll('.list-card-title.js-card-name')
allCardTitles.forEach(item => {
var regex = /(<.*>)/gm;
var str = item.innerText;
var m;
var foundRegex;
while ((m = regex.exec(str)) !== null) {
if (m.index === regex.lastIndex) {
regex.lastIndex++;
}
m.forEach((match, groupIndex) => {
foundRegex = match
});
}
if (foundRegex) {
// remove time from card itself as it will move to top
var regToReplace = foundRegex.replace('<', '&lt;');
regToReplace = regToReplace.replace('>', '&gt;');
item.innerHTML = item.innerHTML.replace(regToReplace, '');
foundRegex = foundRegex.replace('<', '');
foundRegex = foundRegex.replace('>', '');
// add time element
var parent = item.closest('a.list-card.js-member-droppable.ui-droppable')
parent.insertAdjacentHTML('beforeend', `
<div style="
position: absolute;
top: -3px;
right: 9px;
background: #ebecf0;
border-radius: 0px 0px 10px 10px;
padding: 3px 12px;">
${foundRegex}
</div>
`);
}
})
// Add percentage %90%
var allCardTitles = document.querySelectorAll('.list-card-title.js-card-name')
allCardTitles.forEach(item => {
var regex = /(%.*%)/gm;
var str = item.innerText;
var m;
var foundRegex;
while ((m = regex.exec(str)) !== null) {
if (m.index === regex.lastIndex) {
regex.lastIndex++;
}
m.forEach((match, groupIndex) => {
foundRegex = match
});
}
if (foundRegex) {
// remove %% from card itself as it will move to top
item.innerText = item.innerText.replace(foundRegex, '');
foundRegex = foundRegex.replace('%', ''); // 90
var percentage = parseInt(foundRegex)
var percentageBar = parseInt((percentage * 71) / 100)
// add percent bar
var parent = item.closest('.list-card-details.js-card-details')
parent = parent.querySelector('div.list-card-labels.js-card-labels')
if (parent) {
parent.insertAdjacentHTML('beforeend', `
<span style="
font-size: 12px;
color: #808080d1;
padding-left: 5px;">
<span
class="card-label card-label-red mod-card-front"
style="
background: #e2e2e2;
max-width: none;
width: auto;
width: ${percentageBar}%;">
<span class="label-text"> </span>
</span>
<span style="position: relative;top: -6px;">${percentage}%</span>
</span>
`);
}
}
})
# ngrok to expose a server behind a firewall (webhooks etc.)
[Note: cloudflare is even better], also see - https://techbitz.dev/how-to-open-your-local-dev-app-on-your-phone

No outside access to your PC because it's behind a firewall and you have no control over it? Try ngrok! It will expose your server to the outside world!
# Linux reverse search with Ctrl + R (and hstr)
Looking for a command you forgot somewhere in history?

$ sudo add-apt-repository ppa:ultradvorka/ppa && sudo apt-get update && sudo apt-get install hstr && hstr --show-configuration >> ~/.zshrc && source ~/.zshrc
# Ways to create bootable usb drives
- Ventoy (best) - use this to simply drop iso images on your drive
- Etcher (second best)
- Virtual CD device like Zalman (third best)
- Unetbootin
- Ubuntu Image Creator
# Run everything in Docker
You should run everything in Docker - often even the simplest things can get messy when run locally, e.g. when you have multiple PHP versions installed with strange defaults.
Instead, you can set up an nginx reverse proxy locally; then you will be able to run each project on its own local domain, for example:
http://martins.arkbauer.com:8380/
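One way to set this up is with the nginx-proxy image, which watches the Docker socket and routes requests by hostname; a minimal sketch (the domain matches the example above, the app image name is made up):

$ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro nginxproxy/nginx-proxy

# each project container then just declares its own domain
$ docker run -d -e VIRTUAL_HOST=martins.arkbauer.com my-app-image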
# Use Postman console to see raw http requests and responses
You can open up the Postman Console, and there you will be able to see the raw HTTP requests and responses, like:
HTTP/1.1 200 OK Date: ....
# Simplify Laravel CRUD controllers
This is great for sharing the form and template between your create and edit views. Simplify CRUD controllers:
https://talltips.novate.co.uk/laravel/simplify-laravel-crud-controllers
# How do open source maintainers pick which contributors to “invest” in (time, effort, mentorship, etc)
I don’t know about others but for me the main thing isn’t coding skill. The main thing I’m looking for in a contributor is good judgement. This concept may sound fuzzy...When I see “I tested on three resolutions in three browsers and went through scenarios X, Y and Z” (or equivalent that makes sense for the project) my heart fills with joy. This person knows I’ll have to do this anyway and they’ve shown the courtesy of doing it first. Thanks.
This doesn’t mean they can’t screw up. All of us can! But they apply enough diligence that the mistakes feel earned. There’s a difference between something slipping through and literally not bothering to check whether the change does the thing. Be your own QA and I’ll trust you.
This might sound ungrateful, but in many cases it’s the maintainer helping *you*: to land a commit in a popular project, to have a good contributing experience, etc. Often, they could do an equivalent change fast, but they want it to be yours, so they spend days on back-and-forth.
They are very perceptive of the context. Beyond following the guidelines, they try their best to infer the things that may not be directly visible: assumptions, project aspirations, quality bar, tech debt areas, frustrating workflows, intentionally cut corners, style, vibes.
They see the end result as a holistic product. They look at their change in the context of the goals of the project, other people’s issues, other solutions. They act as if they are responsible for the whole thing—even if at the moment they only change a small part.
Responsibility is central to this. Most contributions—while great—need maintainers to add more responsibility to their plates. Test this change, figure out how this code worked before, research browser differences, etc. But there are some contributors who *take* responsibility.
They look for opportunities and propose meaningful changes. Changes that are scoped, pragmatic, usually incremental. Their changes “feel” more like “carving out” what should be “already there” rather than attaching something extra. They make the $PROJECT feel more $PROJECT-y.
There is no ego in their work. It’s clear they’re not *just* sending it to build up a resume. Their priority is to land the right change for the project (and figure out what it is!) rather than to land their exact idea. They might send *simple* changes but not spammy ones.
So far I’ve focused on the code (although the same applies to documentation too). However, they are usually active beyond that. In fact, I usually see these people start *outside* code: helping people in issues, testing other PRs, making reproducing cases for bug reports.
This makes sense because for established projects, many valuable activities *are* external to code. There’s nothing wrong with wanting to score a PR, but it’s noticeable when a person has a more community/product-driven mindset, and takes some routine work off maintainers’ plate.
They show an interesting balance of cultivating a vision for the parts they’re interested in while staying genuinely curious about, and protective of, the project’s overall existing vision.
How does one learn this? I don’t know. I’ve seen people fresh out of bootcamp who excel at this and I’ve also seen people with 10+ years of experience who don’t. Empathy helps. If you can imagine what it’s like to be in maintainer’s shoes, you’ll soon be ready to be a maintainer.
# Use composer local repositories
{
    "name": "Mz/my-app",
    "type": "project",
    "repositories": [
        {
            "type": "path",
            "url": "../Mz-bundles/MzGameBundle"
        },
        {
            "type": "path",
            "url": "../Mz-bundles/MzQueueBundle"
        }
    ],
    "require": {
        "mz-app/mz-game-bundle": "*",
        "mz-app/mz-queue-bundle": "*",
        "mz-app/mz-shared-library": "*"
    }
}
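After editing composer.json, pull the path packages in (Composer symlinks path repositories by default; the package names are the ones from the example above):

$ composer update mz-app/mz-game-bundle mz-app/mz-queue-bundle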
# CSS Architecture
Better organize your CSS:
https://www.webdesignerdepot.com/2020/12/2-smartest-ways-to-structure-sass/
https://www.sitepoint.com/architecture-sass-project/
http://smacss.com/
# Layer your Docker containers
It's generally a good idea to build your base image (which will not change, at least not often) and push that image to a repository, then build your containers on top of that image. For example, you might build an older PHP 5 image with all the necessary legacy libraries, thus future-proofing your application in case those images or libraries are no longer available. It also speeds up build times.

# The Clean architecture on Frontend
I guess not so much...
https://github.com/martinszeltins/frontend-clean-architecture
https://dev.to/bespoyasov/clean-architecture-on-frontend-4311
/entities (or models)
/app (aka interactors)
/services
/components
/views
/api
/middleware
/store
routes.js
helpers.js
app.js
# Local Global Git hook
$ git config --global core.hooksPath /home/martins/GlobalGitHooks
# ~/GlobalGitHooks/post-checkout
#!/bin/sh
# If my-app container is running, then copy
# the git HEAD file with current branch to container.
if [ -z "$(docker ps | grep my-app-admin-php)" ]; then
echo ""
else
docker cp /home/martins/Programming/my-app/.git/HEAD my-app-admin-php:/var/www/git/HEAD
fi
# Install new GIMP fonts
If the installed fonts do not appear in GIMP after a restart, you can try placing them in one of GIMP's font directories.
https://askubuntu.com/questions/304957/how-to-install-additional-gimp-fonts#:~:text=All%20you%20need%20to%20do,a%20great%20selection%20of%20fonts
For some reason, on Ubuntu 19.04 + GIMP 2.10, after installing the font in Ubuntu it did not appear in GIMP - neither after restarting GIMP nor after restarting the system itself. What solved the problem was copying the .TTF font file to one of GIMP's font directories. Font directories can be managed in the Edit > Preferences > Directories > Fonts menu. In my case, that directory was /home/snap/gimp/227/.config/GIMP/2.10/fonts.
# Git go back to a good commit (revert)
Delete the most recent commit (that wasn't pushed to remote yet) and go back to master.
$ git reset --hard HEAD~1

Delete the last bad commits and revert to the commit that was good (be careful that no other branches depend on this)
$ git reset --hard c75d5b93c07d6c8bfe4e973d6d1666c06acd1802
$ git push --force
-----------------------
Git go back to a good commit
1) Option 1
git revert SHA (commit sha which one you want to revert, the last one)
-------------------------
2) Option 2
git checkout SHA1 (the good one)
git push --force
Here is another clean way which I found useful how to revert last 3 committed / pushed commits.
git revert --no-commit HEAD~3..
git commit -m "your message regarding reverting the multiple commits"
This command reverts last 3 commits with only one commit.
Also doesn't rewrite history, so doesn't require a force push.
The .. helps create a range. Meaning HEAD~3.. is the same as HEAD~3..HEAD
# Linux disable mitigations for CPU vulnerabilities
https://sleeplessbeastie.eu/2020/03/27/how-to-disable-mitigations-for-cpu-vulnerabilities/

# Fix external monitor screen tearing
Below are the steps which worked for me using two screens:
1. Run xrandr to reveal the monitor connection names, resolutions and offsets.
Take note of the following two lines which are needed later
HDMI-0 connected 1920x1080+0+0
DVI-I-1 connected primary 1920x1080+1920+0
2. Use the following code as a template to fix screen tearing:
Replace the screen names, resolutions and offsets for each screen as per your configuration:
nvidia-settings --assign CurrentMetaMode="DVI-I-1: nvidia-auto-select @1920x1080 +1920+0 {ViewPortIn=1920x1080, ViewPortOut=1920x1080+1920+0, ForceFullCompositionPipeline=On}, HDMI-0: nvidia-auto-select @1920x1080 +0+0 {ViewPortIn=1920x1080, ViewPortOut=1920x1080+0+0, ForceFullCompositionPipeline=On}"
3. Add the previous command to the Startup Application Preferences to ensure the fix persists across reboots. Upon running the 2nd step command, your screens may blank to black momentarily before the fix is applied. You might like to watch this video to test whether the tearing has gone.
IN MY CASE:
$ nvidia-settings --assign CurrentMetaMode="HDMI-0: nvidia-auto-select @1920x1080 +0+0 {ViewPortIn=1920x1080, ViewPortOut=1920x1080+0+0, ForceFullCompositionPipeline=On}"
# All these go to the same email mailbox
[email protected]
[email protected]
[email protected]
# Instead of carrying a flash drive around to copy files, just use failiem.lv & ej.uz instead :)

# How to mount NAS and add to dock
1. install nfs "sudo apt install nfs-common -y"mount temporarily with:
sudo mount -t nfs 192.168.1.90:/nfs/Public /mnt/NAS
or better...
2. add this to your fstab
192.168.1.90:/nfs/Public /mnt/NAS nfs defaults 0 0
3.
$ touch ~/.local/share/applications/nas.desktop
$ gedit ~/.local/share/applications/nas.desktop
add this content:
[Desktop Entry]
Name=NAS
Comment=NAS Disk
Keywords=nas;disk;cloud;
Exec=nautilus /mnt/NAS
Icon=/home/martins/Downloads/mycloud.png
Terminal=false
Type=Application
Categories=GNOME;GTK;Utility;Core;FileManager;
# Linux show time in terminal in upper right corner
$ while sleep 1;do tput sc;tput cup 0 $(($(tput cols)-11));echo -e "\e[31m`date +%r`\e[39m";tput rc;done &

# Sudoers without password
$ sudo visudo

# then add this line
martins ALL=(ALL) NOPASSWD: ALL
# Install and setup dnsmasq (works on Ubuntu 20.04)
Because NetworkManager doesn't work well together with a separately installed dnsmasq, we can configure NetworkManager to use dnsmasq internally.

1. Edit the file /etc/NetworkManager/NetworkManager.conf, and add the line dns=dnsmasq to the [main] section, so it looks like this:
[main]
plugins=ifupdown,keyfile
dns=dnsmasq

[ifupdown]
managed=false

[device]
wifi.scan-rand-mac-address=no

2. Let NetworkManager manage /etc/resolv.conf
$ sudo rm /etc/resolv.conf ; sudo ln -s /var/run/NetworkManager/resolv.conf /etc/resolv.conf
3. Configure *.example.com wildcard domain
$ echo 'address=/.example.com/127.0.0.1' | sudo tee /etc/NetworkManager/dnsmasq.d/example.com-wildcard.conf
4. Reload NetworkManager and test the new configuration
NetworkManager should be reloaded for the changes to take effect.
$ sudo systemctl reload NetworkManager
Then we can verify that we can reach some usual site :
$ dig askubuntu.com +short
151.101.129.69
151.101.65.69
151.101.1.69
151.101.193.69
And lastly verify that the example.com and subdomains are resolved as 127.0.0.1:
$ dig example.com askubuntu.example.com a.b.c.d.example.com +short
127.0.0.1
127.0.0.1
127.0.0.1
Source: https://askubuntu.com/questions/1029882/how-can-i-set-up-local-wildcard-127-0-0-1-domain-resolution-on-18-04-20-04
# How to update grub (UEFI)
After installing a second OS: in my case I already had Ubuntu 20.04 and Fedora Linux installed. I wanted to replace Fedora with Kubuntu, so I installed Kubuntu. But to my surprise GRUB was not updated - it still showed the Ubuntu and Fedora entries, so there was no way for me to boot into my new Kubuntu.

P.S. I later noticed that there was a way to choose one partition as my "EFI partition" - perhaps that would have also fixed the issue. Nonetheless...
I tried GRUB Customizer but even after installing to MBR and reloading and saving everything - it did not change GRUB at all.
I finally found a solution.
First, I noticed that there was a partition /boot/efi like this
Filesystem - Mounted on
/dev/sda1 - /boot/efi
Then I created a new directory and mounted /dev/sda1 there
$ sudo mkdir /mnt/bootefi
$ sudo mount /dev/sda1 /mnt/bootefi
And then installed GRUB on this new partition like this
$ sudo apt install grub-efi -y
$ sudo grub-install --efi-directory=/mnt/bootefi
After rebooting, finally my GRUB was updated and I saw my new Kubuntu entry!
# Temporarily disable foreign key checks in SQL
Sometimes you need to delete some database rows but it throws an error about foreign key checks. You can temporarily disable them in order to be able to perform the operation.

SET foreign_key_checks = 0;
truncate stripe_product;
SET foreign_key_checks = 1;
# Can't find a file with find command? - try locate - and it is also much faster.
An alternative to using find is the locate command. This command is often quicker and can search the entire file system with ease. You can install the command with apt-get:

sudo apt-get update
sudo apt-get install mlocate
The reason locate is faster than find is because it relies on a database of the files on the filesystem. The database is usually updated once a day with a cron script, but you can update it manually by typing:
sudo updatedb
Run this command now. Remember, the database must always be up-to-date if you want to find recently acquired or created files (you can use a cron job to keep it up to date with "sudo updatedb").
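Basic usage, once the database is built (the file names below are just examples):

$ locate nginx.conf

# case-insensitive glob search
$ locate -i "*.desktop" | head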
# Add AppImage to Gnome Shell as application shortcut (for Dock and Menu)
Works for Ubuntu 20.04 with GNOME Shell 3.36.

1. Create a new file /home/martins/.local/share/applications/MyApp.desktop

2. Make it executable with:
$ sudo chmod 777 /home/martins/.local/share/applications/MyApp.desktop
3. add this content
[Desktop Entry]
Name=Navicat
Exec=/home/martins/Applications/Navicat.AppImage
Icon=/home/martins/Downloads/navicat.png
Type=Application
Categories=Utility;
# Reboot Linux using SysRq key combination
You can use the "Magic SysRq key" (https://en.wikipedia.org/wiki/Magic_SysRq_key) to bypass everything and call kernel routines directly. For example you could reboot your computer when everything else has frozen and locked up to prevent disk corruption. Although the kernel itself should be still responsive and not totally locked up in kernel panic.First, make sure that is is enabled.
$ cat /proc/sys/kernel/sysrq
This will show a number like 176 (which means that reboot/poweroff, remount and sync commands are enabled). It can be configured to allow more commands but make sure that the reboot command is enabled.
Then, if your computer totally locks up, you can try pressing Alt + SysRq/PrtScr + B to reboot the computer. As long as the kernel is not totally locked up, it should work.
P.S. If a keyboard does not have a separate SysRq key, then PrtScr will double as the SysRq key (even if it does not say so).
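To keep the needed functions enabled across reboots, you can set the sysctl persistently (value 1 enables ALL SysRq functions - pick a more restrictive bitmask if you prefer):

$ echo "kernel.sysrq = 1" | sudo tee /etc/sysctl.d/90-sysrq.conf
$ sudo sysctl --system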
# hstr for raspberry pi
Since the Raspberry Pi uses the armhf architecture, here is a version of hstr for it:
http://http.us.debian.org/debian/pool/main/h/hstr/hstr_2.3+ds-1_armhf.deb
# Brightness changer app for Linux
If for some reason you can't change screen brightness, especially on laptops:

$ sudo add-apt-repository ppa:apandada1/brightness-controller
$ sudo apt-get update
$ sudo apt-get install brightness-controller
# Better docker container ls command
$ docker container ls -a --format "table {{.Names}}\t{{.ID}}\t{{.Status}}\t{{.Command}}\t{{.Ports}}" | head -1 && docker container ls -a --format "table {{.Names}}\t{{.ID}}\t{{.Status}}\t{{.Command}}\t{{.Ports}}" | tail -n +2
NAMES CONTAINER ID STATUS COMMAND PORTS
stamina-client-node fe737eb64b20 Up 11 minutes "docker-entrypoint.s…" 0.0.0.0:14078->3000/tcp, :::14078->3000/tcp
stamina-server-mysql 1697c773f069 Up 11 minutes "docker-entrypoint.s…" 33060/tcp, 0.0.0.0:29448->3306/tcp, :::29448->3306/tcp
stamina-server-nginx f637c2979ce0 Up 11 minutes "/docker-entrypoint.…" 0.0.0.0:27028->80/tcp, :::27028->80/tcp
stamina-server-php 62dc58e2f45c Up 11 minutes "/app/server-php-ent…" 9000/tcp
# Avoid white rounded corners
You can set the inner rounded corner element smaller radius than the outer
<html>
<head>
<style>
body {
background: #292929;
}
.box {
width: 400px;
height: 400px;
background: white;
border-radius: 10px;
}
.box-header {
background: #353535;
width: 100%;
height: 50px;
border-radius: 5px;
}
</style>
</head>
<body>
<div class="box">
<div class="box-header"></div>
</div>
</body>
</html>
# Enable color terminal in docker containers + better bash
When we shell into a docker container it usually doesn't have any colors or aliases, and the identifier is a strange root@3c4f8gjk which is not helpful at all - so let's fix it!

You can enable colors for ls, grep and the shell like this. Add this to .bashrc or a similar file. Plus better aliases.
1. Add docker/containers/CONTAINER/root/.bashrc
2. Add to Dockerfile
# We don't want to work with sh (if bash is not installed in this container)
RUN apk add --no-cache bash
# The busybox version of grep sucks (doesn't show colors), let's install GNU grep
RUN apk add --no-cache --upgrade grep
# Add .bashrc config file for color and alias support
COPY ./root/.bashrc /root/.bashrc
.bashrc file contents
# set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
xterm-color|*-256color) color_prompt=yes;;
esac
force_color_prompt=yes
color_prompt=yes
if [ "$color_prompt" = yes ]; then
PS1='🐳
${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@documentation-landing-nginx\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$
'
else
PS1='🐳 ${debian_chroot:+($debian_chroot)}\u@documentation-landing-nginx:\w\$ '
fi
unset color_prompt force_color_prompt
# If this is an xterm set the title to user@host:dir
case "$TERM" in
xterm*|rxvt*)
PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@documentation-landing-nginx: \w\a\]$PS1"
;;
*)
;;
esac
# enable color support of ls and also add handy aliases
alias ls='ls --color=auto'
alias grep='grep --color=auto'
alias fgrep='fgrep --color=auto'
alias egrep='egrep --color=auto'
# colored GCC warnings and errors
#export GCC_COLORS='error=01;31:warning=01;35:note=01;36:caret=01;32:locus=01:quote=01'
# some more ls aliases
alias ll='ls -alF'
alias la='ls -A'
alias l='ls -CF'
# Install and run Stamina on Ubuntu
$ wget https://raw.githubusercontent.com/Winetricks/winetricks/master/src/winetricks
$ sudo chmod +x ./winetricks
$ ./winetricks -q mfc42
$ env WINEPREFIX="/home/martins/.wine" wine-stable C:\\\\windows\\\\command\\\\start.exe /Unix /home/martins/.wine/dosdevices/c:/users/Public/Desktop/Stamina.lnk
# JUST READ THE MANUAL!!
Seriously, sometimes you want to understand the thing... like RIGHT NOW!! But that is not how it works. Believe me - just read through the docs (yes, you won't want to do this). And then you will very likely understand it! I know, it's crazy but it works, really! Trust me on this one!

# Share your keyboard and mouse between multiple computers
If you would like to use the same keyboard and mouse for multiple computers - barrier is the answer!

P.S. You might need to disable SSL in order for it to work! https://github.com/debauchee/barrier
# Use dcstart and dcstop instead of dcd and dcu
# Rotate between multiple DNS nameservers
This is useful if you've ever had your primary DNS server become unreachable for any reason. This allowed me to fix a strange bug with a VPN service where it would try to use the wrong nameserver by default.

In your /etc/resolv.conf you can specify multiple nameservers with the rotate option like this:

options rotate
options timeout:1
nameserver 192.168.1.1
nameserver 10.0.0.1

This will use both nameservers in rotation and wait a maximum of 1 second for an answer before trying the next one.
# Use Javascript Proxies to automatically update the DOM when data changes
This is what all modern JavaScript frameworks do for you under the hood but just in case you need to do this manually here is how to do it.
/**
* Ads batch delete
*
* First, we initialize by registering DOM event listeners and DOM element
* observer. We also create a proxy for selectedAds array which automatically
* triggers a render function every time we change the array.
*
* The DOM Element Observer listens to when a page is changed, caused by
* a partial DOM re-render. We then call the initialize function again to
* re-render and register the new event listeners.
*/
import AdvertisementApi from '../api/advertisementApi'
import { observeDomElement } from './mutationObserver'
let domElementObserverRegistered = false
let selectedAds = new Proxy([], {
set(target, property, value) {
target[property] = value
render()
return true
}
})
const registerEventListeners = () => {
const { inputCheckboxes, btnCancel, btnDelete } = getDomElements()
inputCheckboxes.forEach(checkbox => {
checkbox.addEventListener('click', event => {
selectAd(event.target.value)
})
})
btnCancel.forEach(btn => {
btn.addEventListener('click', cancelSelection)
})
btnDelete.forEach(btn => {
btn.addEventListener('click', deleteSelectedAds)
})
}
const registerDomElementObserver = () => {
if (!domElementObserverRegistered) {
observeDomElement('#vacancies-container', initAdsBatchDelete)
domElementObserverRegistered = true
}
}
const selectAd = id => {
id = parseInt(id)
if (!selectedAds.includes(id)) {
selectedAds.push(id)
} else {
const index = selectedAds.findIndex(existingId => {
return existingId === id
})
selectedAds.splice(index, 1)
}
}
const cancelSelection = () => {
selectedAds.splice(0, selectedAds.length)
}
const deleteSelectedAds = () => {
confirmModal({
'title': Translator.trans('confirm_popup.vacancy.title'),
'description': Translator.trans('confirm_popup.vacancy.description'),
'confirmCallback': () => {
AdvertisementApi.delete(selectedAds.join(',')).then(() => {
location.reload()
})
}
})
}
/**
* DOM re-render will happen automatically whenever the selectedAds
* is changed thanks to the Proxy object.
*/
const render = () => {
const { btnDelete, btnCancel, spanCountSelected, inputCheckboxes } = getDomElements()
// Enable / disable delete and cancel buttons
if (selectedAds.length === 0) {
btnDelete.forEach(btn => {
btn.classList.add('disabled')
})
btnCancel.forEach(btn => {
btn.classList.add('disabled')
})
} else {
btnDelete.forEach(btn => {
btn.classList.remove('disabled')
})
btnCancel.forEach(btn => {
btn.classList.remove('disabled')
})
}
// Show selected count
spanCountSelected.forEach(span => {
span.innerHTML = `(${selectedAds.length})`
})
// Update checkbox checked status - when cancel is pressed or page is changed.
inputCheckboxes.forEach(checkbox => {
checkbox.checked = selectedAds.includes(parseInt(checkbox.dataset.vacancyId))
})
}
const initAdsBatchDelete = () => {
registerEventListeners()
render()
registerDomElementObserver()
}
/**
* We need to get new instances of DOM elements, since they change after
* partial DOM re-render, so we call getDomElements every time we need access to them.
*/
const getDomElements = () => {
let inputCheckboxes = document.querySelectorAll('.profile-list__checkbox-ad')
let btnDelete = document.querySelectorAll('.profile-list__delete-selected-ads .btn-delete')
let btnCancel = document.querySelectorAll('.profile-list__delete-selected-ads .btn-cancel')
let spanCountSelected = document.querySelectorAll('.profile-list__delete-selected-ads .count-selected')
return {
inputCheckboxes, btnDelete, btnCancel, spanCountSelected
}
}
export default initAdsBatchDelete;
const observeDomElement = (elementSelector, callback) => {
let element = document.querySelector(elementSelector)
if (!element) {
return
}
let hasAddedNodes = false
let observer = new MutationObserver(mutations => {
mutations.forEach(mutation => {
if (mutation.addedNodes.length > 0) {
hasAddedNodes = true
}
})
if (hasAddedNodes) {
callback()
}
})
observer.observe(
document.querySelector(elementSelector),
{ childList: true }
)
}
export { observeDomElement }
# CSS vertical timeline / pill menu
<div class="p-4 bg-india h-screen">
<ul>
<li class="relative pb-3 before:block before:z-[-1] before:absolute before:bg-gray-500 before:w-[2px] before:top-0 before:left-[5px] before:bottom-0">
<div class="flex">
<div class="text-gray-400">
<div class="w-3 h-3 rounded-full bg-white border-2 border-gray-500"></div>
</div>
<div class="pl-4 mt-[-7px]">First option</div>
</div>
</li>
<li class="relative pb-3 before:block before:z-[-1] before:absolute before:bg-gray-500 before:w-[2px] before:top-0 before:left-[5px] before:bottom-0">
<div class="flex">
<div class="text-gray-400">
<div class="w-3 h-3 rounded-full bg-gray-500"></div>
</div>
<div class="pl-4 mt-[-7px] font-bold">Selected option <br> with more text</div>
</div>
</li>
<li class="relative pb-3 before:block before:z-[-1] before:absolute before:bg-gray-500 before:w-[2px] before:top-0 before:left-[5px] before:bottom-0">
<div class="flex">
<div class="text-gray-400">
<div class="w-3 h-3 rounded-full bg-white border-2 border-gray-500"></div>
</div>
<div class="pl-4 mt-[-7px]">Third option</div>
</div>
</li>
<li class="relative pb-3">
<div class="flex">
<div class="text-gray-400">
<div class="w-3 h-3 rounded-full bg-white border-2 border-gray-500"></div>
</div>
<div class="pl-4 mt-[-7px]">Last option <br> with more text</div>
</div>
</li>
</ul>
</div>
#Turn any device into a secondary screen for your computer (like a tablet)
Perhaps it is better to just buy a good quality extendable laptop monitor. But below are some options if you want to use a tablet for example as your second monitor.
1. Deskreen (https://deskreen.com/)
2. https://www.tomshardware.com/how-to/use-tablet-or-phone-second-monitor-linux
3. https://www.omgubuntu.co.uk/2022/06/use-ipad-as-second-monitor-ubuntu-22-04
4. https://www.youtube.com/watch?v=hyt16QjM2No&ab_channel=Davidtendo
5. https://www.maketecheasier.com/use-smartphone-as-second-monitor-for-linux/
6. https://github.com/santiagofdezg/linux-extend-screen
Or perhaps better to buy a portable laptop monitor like this one - ARZOPA Portable Monitor, 15.6'' 1080P FHD Laptop Monitor USB C HDMI Computer Display HDR Eye Care External Screen w/Smart Cover for PC Mac Phone Xbox PS5
#How to create a custom TypeScript type
What if you wanted to have a TypeScript type such as a URL or Unix Filesystem Path? This is not really possible with TypeScript currently, but you can achieve it at runtime by creating your own "class type". Classes are almost perfect for this.
class UnixFilesystemPath {
private path: string
constructor(path: string) {
this.path = this.validated(path)
}
public getPath(): string {
return this.path
}
public setPath(path: string): void {
this.path = this.validated(path)
}
private validated(path: string): string {
// Toy validation for demonstration - a real implementation would check the path format instead of one hardcoded value
if (path !== '/bin/bash') {
throw new TypeError(`${path} is not a valid unix filesystem path.`)
}
return path
}
}
let bashPath: UnixFilesystemPath = new UnixFilesystemPath('/bin/bash')
console.log(bashPath.getPath())
bashPath.setPath('/bin/bash');
#Easiest method to Copy SSH Keys of One machine to another machine in Linux
For example, in my case I want common SSH keys between my laptop and my office machine. When I tried to google it, the results surprised me with the over-complication of the methods to copy SSH keys.
So here I am sharing a few easiest methods we’ve used to copy SSH Keys from one machine to another.
Copy SSH Keys on a Real machine
This blog mainly covers this part. To copy SSH keys from one machine to another real machine, follow the steps below:
- Open Machine1 (e.g. your computer) and copy the .ssh folder to a USB stick or any other storage device.
- Then open your Machine2 (e.g. your laptop).

Now, to copy the SSH keys you can follow any of the following methods:
Method 1 Using SSH-ADD
- Put the .ssh folder under ~/ (i.e. /home/<user>) on Machine2.
- Run ssh-add on machine2.
- Now, what does ssh-add do exactly? ssh-add is a helper program for ssh-agent: it adds private key identities (from your ~/.ssh directory) to the authentication agent (ssh-agent) so that the agent can take care of authentication for you.
- Now test the SSH keys on the machine2 (e.g. your laptop).
- If you still find issues, check the permissions of the .ssh folder and make sure they are correct (see the sketch after the table below).
| File/Directory | Permission |
|---|---|
| .ssh folder | drwx------ |
| id_rsa | -rw------- |
| id_rsa.pub | -rw-r--r-- |
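A minimal sketch of Method 1 on Machine2 (the USB stick mount point is an assumption):

$ cp -r /media/usb/.ssh ~/    # copy the .ssh folder from the storage device
$ chmod 700 ~/.ssh            # drwx------
$ chmod 600 ~/.ssh/id_rsa     # -rw-------
$ chmod 644 ~/.ssh/id_rsa.pub # -rw-r--r--
$ eval "$(ssh-agent -s)"      # make sure an agent is running
$ ssh-add ~/.ssh/id_rsa       # hand the key to the agent
$ ssh -T git@github.com       # test against any host you use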
Method 2 Manually Copy SSH Keys
- Copy the .ssh folder from the storage device to any location on Machine2.
- Now generate new SSH keys on Machine2.
- The next step is to replace the contents of all the files in Machine2's .ssh folder, one by one, with Machine1's SSH keys.
- Using this method you don't need to worry about the permissions of the SSH keys.
Copy SSH Keys on a Virtual machine
Copying SSH keys to a remote machine is pretty straightforward - you can follow DigitalOcean's wonderful tutorial on SSH keys. That's it!
#OpenReplay - see what users do on your web app
OpenReplay replays what users do, but not only. It also shows you what went under the hood, how your website or app behaves by capturing network activity, console logs, JS errors, store actions/state, page speed metrics, cpu/memory usage and much more.
https://github.com/openreplay/openreplay
#Change linux monitor brightness using terminal
$ xrandr --current
DP-4 connected 1920x1080+3840+0 (normal left inverted right x axis y axis) 344mm x 193mm
1920x1080 144.10*+
now we can change the brightness:
$ xrandr --output DP-4 --brightness .5
or with a script like this:
#!/bin/sh
path=/sys/class/backlight/acpi_video0
luminance() {
read -r level < "$path"/actual_brightness
factor=$((100 / max))
printf '%d\n' "$((level * factor))"
}
read -r max < "$path"/max_brightness
xbacklight -set "$(luminance)"
inotifywait -me modify --format '' "$path"/actual_brightness | while read; do
xbacklight -set "$(luminance)"
done
< style="background: #f3f3f3; color: #5d5d5d; white-space: pre-wrap; font-size: 10px;"> #!/bin/bash while true do inotifywait -e modify /sys/class/backlight/acpi_video0/actual_brightness read -r max < /sys/class/backlight/acpi_video0/max_brightness read -r actual < /sys/class/backlight/acpi_video0/actual_brightness # Will give .x brightness brightness=`echo "scale=1; 100/$max*$actual/100" | bc` xrandr --output DP-4 --brightness $brightness done
#Atomic Design System
Atomic design is a modern methodology for building digital design systems. It works very similarly to component-based software design in frontend development, which is also how the S in the SOLID principles works (composing software out of small modules/classes/functions).
It is a methodology composed of five distinct stages working together to create interface design systems in a more deliberate and hierarchical manner. The five stages of atomic design are:
Atoms
Molecules
Organisms
Templates
Pages
https://www.figma.com/community/file/1073848703449461140

#Persistent VPN connection Ubuntu / GNOME
By default in Ubuntu / GNOME the VPN connection does not stay active forever - after some time it disconnects automatically. Let's fix that by making it stay connected always.

First let's find the name of our connection:

$ nmcli con show

Then (for me, Home is the name of the VPN that con show shows):

$ nmcli connection modify Home connection.autoconnect-retries 0
$ nmcli connection modify Home vpn.persistent yes
#How to test your web app on mobile
It can be annoying to test a web app on mobile but there are some ways that can help.
- 1. Change your router's DNS settings to forward the web app to your local machine - mikrotik DNS static *.docker$
- 2. Use USB Debugging in Chrome
#Change first day of week to Monday (Linux, GNOME, Ubuntu)
Run locale to check which is the active locale:

$ locale

To adjust this, change or add the following lines in the LC_TIME section in /usr/share/i18n/locales/

week 7;19971130;5
first_weekday 2
first_workday 2

(first_weekday 2 sets Monday as the first day of the week, and first_workday 2 sets Monday as the first work day of the week.)
And then update the system:
sudo locale-gen

Then log out and log in again.
#Enter sudo mode in Nautilus
Instead of having to launch nautilus by running sudo nautilus you can actually enter a sudo / admin mode directly from inside Nautilus itself by just typing "admin:///" in the address bar.
#Find all desktop file locations
$ sudo apt install locate -y
$ locate "*.desktop" | sort > ~/Documents/desktop_files.txt
#How to Install VirtualBox Guest Additions in Ubuntu
Article: https://www.tecmint.com/install-virtualbox-guest-additions-in-ubuntu/
1. Update your system
$ sudo apt update
$ sudo apt upgrade
2. Once upgrade completes, reboot your Ubuntu guest operating system to effect the recent upgrades and install required packages as follows.
$ sudo apt install build-essential dkms linux-headers-$(uname -r)
3. Next, from the Virtual Machine menu bar, go to Devices => click on Insert Guest Additions CD image as shown in the screenshot. This helps to mount the Guest Additions ISO file inside your virtual machine.
If the autorun does not start automatically you can run the autorun.sh from the mounted CD.
A terminal window will be opened from which the actual installation of VirtualBox Guest Additions will be performed.
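If you prefer to run it manually, the installer lives on the mounted CD; the mount point and the VBox_GAs_* folder name below are assumptions - check with ls /media/$USER/:

$ sudo sh /media/$USER/VBox_GAs_*/VBoxLinuxAdditions.run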
#Turn on / off monitor using command line (Raspberry Pi Linux)
1. Install this
$ sudo apt install libxcb-dpms0
2. Enable it (create or edit this file)
$ sudo nano /etc/X11/xorg.conf
Section "Extensions"
Option "DPMS" "Enable"
EndSection
might also want to edit the same in /etc/X11/xorg.conf.d/*some-monitor.conf*
turn it on just in case
$ xset dpms
Now you might want to reboot your machine.
3. Add to cron
$ crontab -e
55 16 * * 1-5 DISPLAY=:0 xset dpms force off
55 8 * * 1-5 DISPLAY=:0 xset dpms force on
#How to create a systemd service
Creating a systemd service is very simple.
1. Create a new `.service` file in `/etc/systemd/system` and give it permissions to run.
$ sudo touch /etc/systemd/system/my-service.service
$ sudo chmod 777 /etc/systemd/system/my-service.service
then add this content to `my-service.service`
[Unit]
Description=This is my service

[Service]
WorkingDirectory=/home/martins
ExecStart=/home/martins/myserviced.sh
Restart=always

[Install]
WantedBy=multi-user.target
2. Create the `myserviced.sh` script that will run in the background
$ touch ~/myserviced.sh
$ sudo chmod +x ~/myserviced.sh
and add this content to it:
#!/bin/bash
while :
do
    echo "Hello from myserviced!"
    sleep 5
done
3. Reload systemd services so it sees our new service
$ sudo systemctl daemon-reload
4. Start and enable the service so it auto starts on boot.
$ sudo systemctl start my-service.service
$ sudo systemctl enable my-service.service
$ sudo systemctl status my-service.service
5. You can see the stdout/stderr using systemd's `journalctl`
$ journalctl -u my-service.service -f
#Test your HTTP requests
You can examine your HTTP request's payload and headers using webhook.site
https://webhook.site/
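For example, fire a test request at your unique webhook.site URL and watch it appear in the browser (the UUID below is a placeholder - use the one the site generates for you):

$ curl -X POST "https://webhook.site/00000000-0000-0000-0000-000000000000" -H "Content-Type: application/json" -d '{"hello": "world"}'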
#n8n automation tool (IFTTT-style)
With n8n (just like Node-RED) you can create powerful automations by linking together apps and events (when this happens, then do that).
version: "3.5"
services:
n8n-db:
image: mysql:8.0
container_name: n8n-db
restart: unless-stopped
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD-root}
- MYSQL_DATABASE=${MYSQL_DATABASE-n8n}
- MYSQL_USER=${MYSQL_USER-n8n}
- MYSQL_PASSWORD=${MYSQL_PASSWORD-n8n}
volumes:
- ./storage/db/mysql:/var/lib/mysql
n8n:
image: n8nio/n8n:0.207.1
container_name: n8n
restart: unless-stopped
environment:
- MYSQLDB_HOST=n8n-db
- MYSQLDB_PORT=3306
- MYSQLDB_DATABASE=${MYSQL_DATABASE-n8n}
- MYSQLDB_USER=${MYSQLDB_USER-n8n}
- MYSQLDB_PASSWORD=${MYSQL_PASSWORD-n8n}
ports:
- 5678:5678
links:
- n8n-db
volumes:
- ./storage/n8n:/home/node/.n8n
# Wait 30 seconds to start n8n to make sure that MySQL is ready when n8n tries to connect to it
command: /bin/sh -c "sleep 30; n8n start"
#Backup docker database easily
It seems that you can simply specify a volume path where the database is stored, and then you can easily back up your whole project directory, which includes both your project files and the database.
So instead of doing this:
mysql:
image: mysql:8.0
volumes:
- dbdata:/var/lib/mysql/
volumes:
dbdata:
driver: local
You can do this instead:
mysql:
image: mysql:8.0
volumes:
- ./storage/db/mysql:/var/lib/mysql
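With the bind mount above, backing up the project folder captures the database too. Stop the stack first so MySQL's files are in a consistent state (the folder and archive names are assumptions):

$ docker-compose down
$ tar -czf my-project-backup.tar.gz ./my-project
$ docker-compose up -d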
#Best way to backup your entire PC disk - Clonezilla
I hate it that every time I install my OS, I need to manually set it up from scratch (install all the programs, customize everything etc.)
Turns out there is a very simple solution for this - clone your disk to an image and then restore it using Clonezilla.
You can simply clone your existing setup that is on your disk (including the partition table with all partitions) and then later restore it exactly as it was before. It will only copy the data itself, so the image will be much smaller, as it will also be compressed. For example, if you have a 1 TB drive with only 100 GB of data on it, the image might be only around 50 GB.
*Note that when you restore the image, your physical disk must be the same size (or larger) as the disk you cloned. It IS possible to shrink it so it can be restored on a smaller drive than the original, which is more advanced, but it is an option (feel free to research how to do this).
YouTube tutorial: https://www.youtube.com/watch?v=yQ9NpWZ74BU
#Use rclone to sync your NAS with Google Drive
You can use rclone to sync your NAS with Google Drive
It is very simple. First you need to configure your rclone and set up a remote (your google drive)
Basically you can follow the "$ rclone config" command or find a tutorial online how to add a Google Drive remote with root folder.
Then simply run this command to sync your NAS with Google Drive ("Google Drive" is the name of the remote here)
$ rclone sync --progress --skip-links "/mnt/NAS" "Google Drive:"
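To run this on a schedule, you could add a cron entry like the following (a sketch; the time and log path are assumptions, and --progress is dropped since cron has no terminal):

# crontab -e
0 3 * * * rclone sync --skip-links --log-file /var/log/rclone-sync.log "/mnt/NAS" "Google Drive:"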
#Backup SD Card easily with dd
You can easily create a backup of your SD card by plugging your SD card into your computer using an SD card adapter. Then you can use this dd command to make a complete backup of your SD card to an image which later can be restored or mounted. Make sure you do not mount the SD card before you perform the dd command.
PS. You could also use the "conv=sparse" option to create an image file only as large as the data that is actually present on the SD Card.
Another option would be to use gzip to make an archive of everything on the SD Card.
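The two variants mentioned above might look like this (sketches; replace /dev/sdX and the image paths with your own):

# Sparse image - only as large as the data actually present
$ sudo dd if=/dev/sdX of=/path/to/image.img conv=sparse status=progress

# Compressed image using gzip
$ sudo dd if=/dev/sdX status=progress | gzip > /path/to/image.img.gz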
# First find the device location
$ sudo fdisk -l

# Then unmount the partition and run dd
$ sudo umount /dev/sdb1
$ sudo dd if=/dev/sdX of=/path/to/image.img status=progress
Later, if you want to mount the image to take a look at the files inside, you can do this
$ sudo mkdir /mnt/sdcard
$ sudo mount -o loop /home/martins/sd-card.img /mnt/sdcard
If the above throws an error and does not work you can try this (which worked for me on Ubuntu 20.04 LTS)
$ sudo mkdir /mnt/sdcard
$ sudo apt install kpartx -y
$ sudo kpartx -av 2022-12-24-martins-phone-sd-card.img
add map loop15p1 (253:0): 0 251099136 linear 7:15 32768
$ sudo mount -o loop /dev/mapper/loop15p1 /mnt/sdcard   # yes, /dev/mapper/...
#Tip: Use Docker base images with container registry
When building a project using Docker, use a Docker container registry (like Github Container Registry) where you can store your built images. That way users will not have to build these images themselves and it also makes sure everything always works.
First, if you use the GitHub Container Registry, you will need to create a PAT (Personal Access Token) that you will use when logging in with "docker login ghcr.io". Currently you need to go to GitHub > Settings > Developer Settings > Personal access tokens > Tokens (classic) and generate a new classic token. You will need to give it permission to read, write and delete "packages".
After that, log in using the newly generated token with "docker login ghcr.io".
Then you can create your Docker images using Dockerfiles as normal, and write scripts to build, push and pull these images to/from the registry (see the example below). The pushed images can be found under the "Packages" section, right next to "Repositories".
Take a look at an example of how to do this here: https://github.com/martinszeltins/docker-layers
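A minimal sketch of the login / build / push / pull cycle (the username, image name and tag are hypothetical):

$ echo $GHCR_PAT | docker login ghcr.io -u <github-username> --password-stdin
$ docker build -t ghcr.io/<github-username>/my-base-image:1.0 .
$ docker push ghcr.io/<github-username>/my-base-image:1.0
$ docker pull ghcr.io/<github-username>/my-base-image:1.0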
#Use home server for backup, sync tasks (tmux for running tasks in background)
A home server is perfect for when you need to run a long-running task like a backup sync. Tmux allows us to run these tasks so that you can log out, later log back into the server, attach to the tmux session and see the task still running. Preferably you should also get a UPS in case the power is lost.
It is a good idea to back up your PCs / phone regularly and upload the backups to the cloud for extra safety.
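A typical tmux workflow for such a task might look like this (the session name is arbitrary):

$ tmux new -s backup
$ rclone sync --progress --skip-links "/mnt/NAS" "Google Drive:"
# detach with Ctrl+b d, log out, later log back in and run:
$ tmux attach -t backup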
#Debug elements that disappear
When you need to catch / debug an element that appears on mouseover and then disappears there are 2 ways you can do this:
1. Open Chrome Devtools > Sources and then you can Pause the execution with F8
2. If that doesn't work, you can set a breakpoint in Devtools > Elements > Break on > (subtree modification / node removal / attribute modification)
Super helpful to know!
#Shell scripting: set -ex and PS4
The PS4 variable is a special shell variable that is used to specify a string that should be printed before the command line is executed when the -x option is set. It is commonly used in conjunction with the set -x option, which causes commands and their arguments to be printed as they are executed.
# Exit immediately if any command exits with a non-zero status,
# and print commands and their arguments as they are executed.
PS4='\n+ [$(date +"%Y-%m-%d %T")] $ '
set -ex
#How to record computer audio on Linux (Audio Recorder)
[Note: There may now be better apps available for this purpose on Flathub]. For a long time I've been looking for how to record computer audio on Linux. There is a very cool app that lets you do that. In this app all you have to do is choose the "Source" and it will record from any source you want (mic or computer audio).
sudo apt-add-repository ppa:audio-recorder/ppa
sudo apt update
sudo apt install audio-recorder -y
#How to get Stamina music - record it!
If we cannot get the background music for Stamina we can simply record it by using the Audio Recorder (look above).
#Change first day of week... on Ubuntu 22.04 you need to generate locale!
It seems that starting with Ubuntu 22.04 you also need to run this generate command; just specifying the first weekday is not enough anymore.
$ sudo locale-gen en_US.UTF-8
#Overflow Text Ellipsis with Flex
<div class="w-[94%] bg-gray-100 p-2 text-xs mx-auto rounded my-12">
<div class="bg-gray-300 flex gap-1 border border-gray-300 rounded items-center p-1 pr-2">
<img
src="https://i.imgur.com/u5k1DxS.png"
class="max-w-[20px] max-h-[20px] mr-1"
/>
<span class="w-full overflow-hidden text-ellipsis whitespace-nowrap">
<span class="whitespace-nowrap font-semibold">Grill</span>
<span class="align-top text-[5px]">•</span>
<span class="whitespace-nowrap">Color: Silver</span>
<span class="align-top text-[5px]">•</span>
<span class="whitespace-nowrap">Size: 115x598x4,5cm</span>
<span class="align-top text-[5px]">•</span>
<span class="whitespace-nowrap">Color: Silver</span>
<span class="align-top text-[5px]">•</span>
<span class="whitespace-nowrap">Size: 115x598x4,5cm</span>
<span class="align-top text-[5px]">•</span>
<span class="whitespace-nowrap">Color: Silver</span>
<span class="align-top text-[5px]">•</span>
<span class="whitespace-nowrap">Size: 115x598x4,5cm</span>
</span>
<div class="flex-1"></div>
<span>X</span>
</div>
</div>
#Better git diff & fuzzy find any command, alias or file
alias git-diff='git diff | delta --line-numbers --side-by-side'
alias fzfcmd='eval $( (compgen -c && alias | sed '\''s/=.*//'\'' ) | sort -u | fzf)'
#Programming tip: When to use a whole file for a module / composable / class
When you have only one function like ProductImageURL, it does not make sense to make it its own composable; it should be put together with the ProductImage composable or something like that. But as soon as the ProductImageURL function needs helper functions, it makes sense for it to become its own composable.
#Trusted Returns network issue
We were not able to start the project because of a Docker network issue. First the project would not stop because of some depends_on; after removing the depends_on dependency from docker-compose.yaml we were able to dcd the project.
After that, we were not able to bring the project up because Docker kept complaining about the network (something about an external network). We removed all networks from the system and manually added the tr-network like this, with the correct subnet IP address:
docker network create --driver bridge --subnet 172.18.0.0/16 --gateway 172.18.0.1 tr-network
After that we were able to start the project up.
Here is the full history log:
10082* docker compose kill
10083* docker ps
10084* docker rm tr-redis-1-sentinel
10085* docker rm tr-redis-1
10086* dcd
10087* docker stop tr-carrier-app
10088* docker rm tr-carrier-app
10089* dcd
10090* status
10091* docker-compose down --force
10092* dcd
10093* docker network rm tr-network
10094* dcls
10095* docker stop tr-cypress
10096* dcls
10097* which dcd
10098* docker network rm tr-network
10099* docker network ls
10100* docker network rm trusted-returns_tr-network
10101* docker network prune
10102* docker network ls
10103* cd ..
10104* git-reset
10105* docker-compose -f docker-compose.network.yaml up
10106* docker network ls
10107* docker network inspect trusted-returns_tr-network
10108* dcb && dcl
10109* dcd
10110* dcu
10111* docker network rm trusted-returns_tr-network
10112* docker network create tr-network
10113* docker network inspect tr-network
10114* docker network rm tr-network
10115* docker network create --driver bridge --subnet 172.18.0.0/16 --gateway 172.18.0.1 tr-network
10116* dcls
10117* dcu
10118* tr-update-projects
#How to tar with --exclude directories
$ tar -czvf /tmp/archive.tar.gz -C ~/destination/ \
    --exclude=storage \
    .

$ scp [email protected]:/tmp/archive.tar.gz ~/destination/
#Use store a lot more in web apps
Basically it's a good idea to use stores more - a lot more. Something like a store for every view / page, to avoid prop drilling.

#SSH port forwarding
Human: Can you explain this ssh command to me? Specifically the -L part and this part -L 3307:localhost:3306, because I don't understand what is happening there. Usually I just connect to an ssh server like this: ssh user@host and that is all.

ssh -L 3307:localhost:3306 [email protected]
AI: Of course! The SSH command you provided is using port forwarding, which is a way to secure and encrypt connections between local services and remote services, or even between two remote services.
Here's a breakdown of the command:
ssh -L 3307:localhost:3306 [email protected]
ssh: This is the command to start a new SSH session.
-L: This flag specifies that local port forwarding should be set up. Local port forwarding lets you connect from your local machine to another server.
3307: This is the local port number on your machine.
localhost: Once the SSH tunnel is established, this specifies where the data sent to the local port (3307 in this case) should be forwarded to on the remote machine. In this case, the data will be forwarded to localhost, meaning the same machine you're SSHing into.
3306: This is the port on the remote machine (or in this case, localhost on the remote machine) to which the forwarded data will be sent.
[email protected]: This specifies the user (martins) and the host (api.aws.cloud) to SSH into.
Putting it all together:
When you run the command, you'll establish an SSH session with api.aws.cloud as user martins. Simultaneously, the SSH client on your local machine will listen on port 3307. Any connections made to port 3307 on your local machine will be forwarded through the secure SSH tunnel to port 3306 on localhost of the remote machine (api.aws.cloud).
This is particularly useful, for example, if you want to access a MySQL database running on api.aws.cloud at port 3306, but you don't want to expose the database directly to the internet for security reasons. By setting up the tunnel as above, you can connect to the database as if it's running on localhost:3307 on your machine, but all traffic goes securely through SSH.
So if you were to use a MySQL client on your local machine, instead of pointing it to api.aws.cloud:3306, you would point it to localhost:3307, and it would work just as if you were directly connected, but with the added security of SSH encryption.
Human: I think I need a simpler explanation. What exactly is the difference between ssh [email protected] and ssh -L 3307:localhost:3306 [email protected]
Won't the first command do the same thing?
AI: Certainly! Let's simplify the explanation.
ssh [email protected]
This command:
Logs you into api.aws.cloud as the user martins.
Gives you a shell (command line) on api.aws.cloud.
Essentially, you can run commands and interact with api.aws.cloud directly. That's it.
ssh -L 3307:localhost:3306 [email protected]
This command does two main things:
Like the first command, it logs you into api.aws.cloud as the user martins and gives you a shell.
In addition, it sets up a special "tunnel". Any data you send to port 3307 on your computer will be automatically and securely sent to port 3306 on api.aws.cloud.
Think of it like a secret underground pipe that connects port 3307 on your computer to port 3306 on the remote server. This is useful if you have applications on your computer that want to talk to services on the remote server without being directly exposed to the internet.
Main difference:
The first command just gives you access to the remote server's shell. The second command gives you that same access AND sets up a secure tunnel between a port on your machine and a port on the remote server.
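For example, with the tunnel from the chat above running in one terminal, you could point a MySQL client at the local end of the tunnel in another terminal (a sketch; the database user is hypothetical):

$ ssh -L 3307:localhost:3306 [email protected]
# in another terminal:
$ mysql -h 127.0.0.1 -P 3307 -u dbuser -p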
#Linux copy directory to another location recursively with all files
If you have a directory like /home/NAS and would like to copy it to /destination/NAS you can use this command (the trailing slashes are important):

rsync -avh --progress /home/NAS/ /destination/NAS/
you can even specify a different destination directory name like this:
rsync -avh --progress /home/NAS/ /destination/NAS2/
This means that if /home/NAS had files and directories like:

/home/NAS/test.txt
/home/NAS/subdir/something.txt

they will be copied like this:

/destination/NAS/test.txt
/destination/NAS/subdir/something.txt

or

/destination/NAS2/test.txt
/destination/NAS2/subdir/something.txt
#How to use rclone to sync to Google Drive
If you want to sync your local NAS with your Google Drive, you can use rclone!

1. Install the latest(!) version of rclone (instructions are found on the rclone website). Older versions might have outdated ways of authenticating with Google.
2. Run "rclone config"
- choose "Google Drive" number or type "drive"
- provide your own Google Application client id and secret
- Choose "Full access all files" when asked.
- When asked to edit advanced config, say yes and choose root_folder_id to be your Google Drive NAS folder. The ID shows in the address bar when you go into that folder.
- Accept all default values for the rest of advanced config.
- When asked to authorize "Use web browser to automatically authenticate rclone..." choose "No" if you are in a server environment and follow the instructions.
At the end of config it should say something like this:
Configuration complete.
Options:
- type: drive
- client_id: 123546-55e.....apps.googleusercontent.com
- client_secret: GD-F4DF45DF48D94FDFDF
- scope: drive
- root_folder_id: 1Qds4-5416as-asdf-ad-adsf-asdf
- token: {"access_token":"54adfas5d6f4as6df54as6df5","token_type":"Bearer","refresh_token":"11//0asdfasdfasdfasdfasdfasdfasdfasdfadsfafdsasf","expiry":"2023-12-31T01:20:08.236453085+02:00"}
3. List your remote files to make sure it is the correct one:
$ rclone lsd "Google Drive NAS:"
4. Now you can sync your local files with your Google Drive
- run $ rclone sync --progress --skip-links "/mnt/NAS" "Google Drive NAS:"
#How to use rsync to sync / clone everything exactly
If you want to copy (clone) all files recursively, preserving all ownership and permissions exactly as they are, you can use the following rsync command. The -a flag enables archive mode to preserve everything exactly. The -H flag also preserves hard links. -z compresses during transfer. Sudo is needed to preserve files that are owned by root; sudo gives permission to create those files as root. --ignore-errors will skip files that cannot be read.

Local version:

sudo rsync -aHzv --progress --ignore-errors /source-dir/ /destination-dir/

Remote version:

sudo rsync -aHzv --progress --ignore-errors /source-dir/ user@remote-computer:/destination-dir/

More complete examples:
alias app-sync-up='sudo rsync -aHzv --progress --ignore-errors --exclude 'app/server/var/' --exclude 'app/server/vendor/' --exclude 'app/client/node_modules/' --exclude 'app/client/.nuxt/' --exclude 'app/client/.output/' --exclude 'storage/' /home/martins/.vscode/.ssh/session/192.168.0.138/app/ [email protected]:/home/martins/Programming/app/'
alias app-sync-down='sudo rsync -aHzv --progress --ignore-errors --exclude 'app/server/var/' --exclude 'app/server/vendor/' --exclude 'app/client/node_modules/' --exclude 'app/client/.nuxt/' --exclude 'app/client/.output/' --exclude 'storage/' [email protected]:/home/martins/Programming/app/ /home/martins/.vscode/.ssh/session/192.168.0.138/app/'
#How to debug html elements that disappear
There are multiple ways you can debug elements that disappear (for example tooltips, or elements that disappear when the browser loses focus):

1. You can pause execution (F8)
2. You can add a breakpoint in the Elements tab for (subtree modifications, attribute modifications, node removal). For some reason, in some cases this only works in Firefox
3. Another way, for elements like dropdowns that lose focus: open the dropdown, navigate to another tab, expand all elements in the Elements tab, then navigate back to the tab with the dropdown and you can see the HTML tree - just don't click or you will lose focus.
4. Set a timeout in console to trigger debugger after some time. e.g. setTimeout(() => { debugger; }, 5000)
#Vue 3 - how to pass a property of reactive value as ref to a composable
<template>
<div>asdf</div>
</template>
<script setup lang="ts">
import { reactive, watch, toRef, type Ref } from 'vue'
const useAddress = (address: Ref<{ city: string }>) => {
console.log('useAddress() called')
console.log(address)
watch(address, _address => console.log(`address changed: ${_address}`), { deep: true })
}
const myForm = reactive({
name: 'adsf',
address: {
city: 'NY'
}
})
// All the "magic" is in how toRef is used here by using a callback.
useAddress(toRef(() => myForm.address))
setTimeout(() => {
console.log('changing city to Riga')
myForm.address.city = 'Riga'
}, 3000)
</script>
#pbcopy linux alias
pbcopy is a command on macOS that copies the contents of a file (or stdin) to the clipboard. You can create an alias for it on Linux like this:

alias pbcopy='xclip -selection clipboard'

Install xclip first: $ sudo apt install xclip
Then you can use it like this:
$ http https://jsonplaceholder.typicode.com/posts | pbcopy
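For the other direction (pasting), macOS has pbpaste; an xclip-based equivalent should be something like:

alias pbpaste='xclip -selection clipboard -o'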
#Add DNS static entries in router
This will be especially true with IPv6, but the same principle applies to IPv4. Do not try to remember the IP addresses of your devices. Also, do not add host entries to each computer's /etc/hosts file. Instead, create host (static DNS) entries in your router directly so all devices have access to them. In Winbox (MikroTik) go to IP > DNS > Static and add the entries there.

server -> 192.168.1.10
work-pc -> 192.168.1.11
etc.
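If you prefer the RouterOS terminal over Winbox, the same entries can presumably be added like this (a sketch using the example names / IPs above):

/ip dns static add name=server address=192.168.1.10
/ip dns static add name=work-pc address=192.168.1.11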
#How I debugged linuks.lv not working
The MikroTik router was not receiving the packets, so either Cloudflare was not sending them or MikroTik was dropping them immediately. The MikroTik was configured to only let Cloudflare IPs through. Not sure exactly what I did, but after clearing the Cloudflare cache, turning TLS & proxy off and on, and removing and re-adding the NAT src CF address list, the problem disappeared on its own. The MikroTik packet sniffer & Wireshark helped. ¯\_(ツ)_/¯
#Winapps
There are basically 2 ways of running Windows / Mac programs on Linux. One is running a full Windows virtual machine with the full Windows desktop. This takes a lot of resources and you get the whole Windows OS including the desktop. The other way is to use Wine, which adds all the necessary APIs, syscalls and libraries for the apps to work. This is much more lightweight, but many apps do not work or are broken. Winapps takes more of the virtual machine approach but hides the desktop. It also integrates the app launchers to make it seamless - kind of like what WSLg does for Linux apps on Windows.

https://github.com/winapps-org/winapps
Another interesting article to read: https://techbitz.dev/why-cant-you-run-windows-or-mac-programs-on-linux
#How to Wake laptop remotely
I wanted to be able to turn on my laptop remotely from anywhere in the world. But it turns out that this is not easily possible with laptops. There is one "hack" however, IF your laptop supports the "Restore on AC power" feature in the BIOS.

1. Set up simple Wake-on-LAN (WOL). This will not allow you to turn on a laptop that is completely powered off, but at least it will let you wake it from sleep. For this you will not need to do much. Simply download "Awake on LAN" from Flathub or a similar app. Then enable Wake on LAN in the BIOS (if not already enabled). Then enable it for your NIC and that's it.
First, determine which NIC will be used, and then check whether it supports the Magic Packet™ using:

$ sudo ethtool <interface>

where <interface> is the device name of your NIC, e.g. eth0. This command will output some information about the capabilities of your NIC. If this output contains a line similar to the following:

Supports Wake-on: <letters>

where <letters> contains the letter g, the NIC should support the WoL Magic Packet™ method (for the other letters look at man ethtool).

Enabling WoL in the NIC: to check whether WoL is enabled in the NIC, use sudo ethtool <interface> again and look at the Wake-on: line. If it contains g and not d, then Magic Packet™ is enabled. However, if it does contain d, WoL needs to be enabled by running the following command:

$ sudo ethtool -s <interface> wol g

On most systems, issuing this command is required after each boot. If the system's networking is configured via ifupdown, then it is easy to add the line "up ethtool -s <interface> wol g" below the interface's configuration stanza in /etc/network/interfaces. For example:

$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    gateway 10.0.0.138
    up ethtool -s eth0 wol g

This will ensure that WoL is enabled in the NIC on each boot.
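Once WoL is enabled, you can wake the sleeping machine from another computer on the LAN by sending a magic packet, for example with the wakeonlan tool (the MAC address below is a placeholder):

$ sudo apt install wakeonlan
$ wakeonlan AA:BB:CC:DD:EE:FF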
2. If you really want to turn on your laptop remotely, there is a "hack" using the "Restore on AC power" feature paired with a smart plug. All you need to do is have the laptop plugged into a smart plug. Then simply turn the smart plug off and on. This will cause the laptop to turn on, because the "Restore on AC power" feature turns the laptop on when it detects that power has been restored. This is not a perfect solution, but it is a solution.
#How to fix NGINX not logging and how to get upstream headers logged out
It is sometimes hard to get nginx to log access / error logs, especially when using Docker, since it forwards access.log to stdout automatically, so you need to tail the Docker container to see the logs. But even then you might want to create your own log format and see it being logged, and it does not seem to work. You need to know that there can be overrides in a server block that log something else, in which case even your global logger in the "http" block will not be used.
Here is how I solved it:
1. I created my own custom log format and put it in /etc/nginx/nginx.conf like this:
http {
...
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" "$scheme"';
access_log /var/log/nginx/access.log main;
# Custom log format for upstream response headers
log_format upstream_response_headers '$remote_addr - [$time_local] '
'"$request" '
'status $status '
'upstream_response_time $upstream_response_time '
'upstream_addr $upstream_addr '
'upstream_http_set_cookie: "$upstream_http_set_cookie" '
'upstream_http_x_powered_by: "$upstream_http_x_powered_by"';
...
include /etc/nginx/conf.d/*.conf;
}
Then I opened /etc/nginx/conf.d/default.conf and made sure that the server block where I want logging logs out MY custom log format and **nothing else** (this is important):
server {
server_name fancy.website.com;
access_log /var/log/nginx/access.log upstream_response_headers;
http2 on;
listen 443 ssl;
...
location / {
proxy_pass http://fancy.website.com;
set $upstream_keepalive false;
}
}
#How to use adminer Docker image to connect to local db
Docker containers have full access to your local network by default (they run in bridge mode by default). This means all containers can see 192.168.x.x addresses, which are outside the container (on the host's network).
We can use that to our advantage to access a database using the adminer Docker image.
For server enter: 192.168.0.138:3309 (mind the exposed port 3309)
Username: root (or whatever)
Password: root (or whatever)
Database: common (sometimes it is necessary to specify the database, you will be able to switch between them later anyway).
And that's it! You do not need all that crazy stuff with binding adminer to a specific container and db etc. (docker run --link some_database:db -p ...) - no need to link anything.
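Running adminer itself is just as simple (a sketch; the official image listens on port 8080):

$ docker run --rm -p 8080:8080 adminer
# then open http://localhost:8080 and fill in the form as described above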
#How to run GUI apps in Docker Container in Linux
Running a GUI app in a container is very easy and can be very useful, for example if you need to add Cypress to your Docker project but want the GUI version. This means your project will ship with the full Cypress GUI version, so developers will not need to manually install Cypress on their own computers. Pretty cool! I finally found a solution that works.

You might need to pass DISPLAY manually if it does not work automatically (see lipo):
export DISPLAY=:0
Tested on Ubuntu 20.04 and 24.04 - works on both
# allow containers access to your host's X server
$ xhost +local:docker
$ mkdir gui-docker-app
$ cd gui-docker-app
$ nano Dockerfile
# Only older versions of Ubuntu like 20.04
# will have firefox deb available for installing.
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y firefox
CMD ["/usr/bin/firefox"]
$ docker build -t gui-docker-app .
$ docker run -e DISPLAY=$DISPLAY \
--net=host \
-v /tmp/.X11-unix:/tmp/.X11-unix \
gui-docker-app
#How to log incoming Nginx requests
log_format incoming_requests '$remote_addr - [$time_local] '
'"$request" '
'status $status '
'host: "$http_host" '
'user-agent: "$http_user_agent" '
'x-forwarded-for: "$http_x_forwarded_for" '
'referer: "$http_referer" '
'cookie: "$http_cookie" '
'accept: "$http_accept" '
'content-type: "$http_content_type" '
'headers: "$http_headers"';
#CSS truncate overflow
Make sure you travel up the tree and add overflow-hidden: if adding the truncate classes does not cause the CSS to truncate, add overflow-hidden to all parent elements until it does.
On the element itself:
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
And for all parents:
overflow: hidden;
<div class="shadow m-2 p-2">
<div class="border m-2">
<div class="flex">
<div>asdf</div>
<div class="overflow-hidden">
<div class="flex-1 flex overflow-hidden">
<div class="min-w-4 bg-amber-400"></div>
<span class="flex-1 truncate">
<span>1Lorem ipsum, dolor sit amet consectetur adipisicing elit. </span>
<span>2Lorem ipsum, dolor sit amet consectetur adipisicing elit. Blanditiis, totam quam asperiores ullam officiis expedita eaque</span>
</span>
<div class="min-w-4 size-4 bg-amber-600"></div>
</div>
</div>
</div>
</div>
</div>
#Open / Closed principle tip
If we need to branch off, we can create a Factory component. For example, if we have a ShipmentItem but then also DigitalShipmentItem, DHLShipmentItem etc., use a ShipmentItemFactory component which renders the needed variation component. Make each ShipmentItem variation composed of small, reusable components.

If we need reusable *composables*, we can create a base composable and add specific behavior on top (compose), like baseApi and lazyApi.
#How to solve Nuxt navigation problem
There is a problem when you have a brand new form, for example you are creating a new shipment. So you are on the /shipment page. Then you press "Submit", the Shipment is saved and you navigate to /shipment/uuid. But that triggers a Vue / Nuxt Router navigation and all component data is reset. You can solve this by simply keeping all data in the store and telling it not to reset the scroll position. Pretty simple!

https://github.com/nuxt/nuxt/issues/30039
Solve this by keeping all page data in the store! This will prevent component data from getting destroyed.
I was able to achieve this by:
- Keeping all page data in the store
- Setting definePageMeta({ scrollToTop: false });
Repo:
martinszeltins/vue-nuxt-navigate-keep-component-state