QuestionId | AnswerId | QuestionBody | QuestionTitle | AnswerBody |
---|---|---|---|---|
76388802 | 76388888 | When overloading the new operator in a global scope in C++, are we just redefining the original functionality? From what I understand, operator and function overloading works when the overloads have different signatures; however, when overloading the new operator using
void* operator new(size_t n){
return malloc(n);
}
we change the underlying functionality itself, and whenever we call new, this new overload is called? Does this not violate the idea of overloads having different and unique signatures?
I tried overloading the new array operator with extra parameters, and how that works is consistent with my current understanding of operator/function overloads. However, overloading the new operator and the new array operator with just one parameter is where I'm confused.
| Does overloading the new operator in C++ redefine the operator? | operator new is replaceable (from cppreference, same link):
The versions (1-4) are implicitly declared in each translation unit even if the header is not included. Versions (1-8) are replaceable: a user-provided non-member function with the same signature defined anywhere in the program, in any source file, replaces the default version. Its declaration does not need to be visible.
This is not the usual function overloading. You cannot overload a function with the same signature; overloads must be distinguishable by their arguments. Nevertheless, colloquially one often talks about "overloading the new operator", which is fine in the wider sense of "overloading", but not correct in strict C++ terminology.
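To illustrate the distinction, a minimal sketch (the tag parameter is just an invented example of a genuine overload, not something required by the language):
#include <cstdlib>
#include <new>

// Replacement: same signature as the implicitly declared allocation function,
// so it replaces the default global operator new for the whole program.
void* operator new(std::size_t n) {
    if (void* p = std::malloc(n)) return p;
    throw std::bad_alloc{};
}
void operator delete(void* p) noexcept { std::free(p); }

// Overload: the extra parameter gives this a distinct signature,
// so it coexists with the replacement above.
void* operator new(std::size_t n, const char* /*tag*/) {
    return ::operator new(n);
}

int main() {
    int* a = new int(1);            // calls the replacement
    int* b = new ("debug") int(2);  // calls the tagged overload
    delete a;
    delete b;
}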
|
76389588 | 76390201 | I have a really wide table in Oracle that's a type 2 dimension.
Records have from_dates and to_dates with the latest 'current' records having a high end date of 31st Dec 9999. There are currently two partitions on the table, one for the 'current' records and one for 'history' records.
There's a new requirement to only keep the last 12 months of records in the 'history' partition. I interpret this as keeping records that were valid in the last 12 months i.e. where the record's to_date < (this month- 11 months).
Normally if I wanted to get rid of records I'd just drop a partition, but in this case that wouldn't work as I need to retain some of the records in the existing 'history' partition.
Is there any partitioning strategy that could support this or am I barking up the wrong tree?
| Oracle partition / archive strategy for type 2 dimension table | You aren't accomplishing much with merely two partitions, "current" and "history". You need to repartition this by month. Then you can implement a rolling partition drop of partitions older than 12 months, which will require a bit of scripting.
Normally we use interval partitioning INTERVAL(NUMTOYMINTERVAL(1,'MONTH')) so we don't have to maintain partition adds manually or through scripting. However, unfortunately in your case you won't be able to because of your use of the special date 12/31/9999. This is the maximum date allowable in Oracle. Interval partitioning will internally add the interval to date values when determining whether a new partition is needed or not, and that will overflow the maximum date value allowed and raise an error. The use of this special date essentially disables the use of interval partitioning.
You have no choice but to either change your special "eternity" date to something less than one interval away from 12/31/9999 (anything less than 12/01/9999 would permit monthly interval partitioning, or anything less than 12/31/9998 would permit yearly interval partitioning). Or, as usually happens because code would have to be changed to accommodate these solutions, you have to manually build out partitions ahead of time or create a scheduled script that does it for you.
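As a rough illustration of the manual build-out (the table, column and partition names here are made up, and the real boundaries would come from your scheduled script):
CREATE TABLE dim_wide_scd2 (
  dim_key    NUMBER,
  from_date  DATE,
  to_date    DATE
)
PARTITION BY RANGE (to_date) (
  PARTITION p_2023_06  VALUES LESS THAN (DATE '2023-07-01'),
  PARTITION p_2023_07  VALUES LESS THAN (DATE '2023-08-01'),
  PARTITION p_current  VALUES LESS THAN (MAXVALUE)  -- holds the 31-DEC-9999 rows
);

-- rolling purge, run monthly: drop partitions older than 12 months
ALTER TABLE dim_wide_scd2 DROP PARTITION p_2023_06 UPDATE INDEXES;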
|
76391632 | 76391801 | How to find the matching element in array from different array? in C#
I have different varieties of products and dynamically created attributes for every variety.
public class SingleVariety
{
[JsonProperty("varietyId")]
public int VarietyId { get; set; }
[JsonProperty("varietyName")]
public string VarietyName { get; set; }
[JsonProperty("sku")]
public string Sku { get; set; }
public List<SingleAttribute> Attributes { get; set; }= new List<SingleAttribute>();
}
public class SingleAttribute
{
[JsonProperty("attributeName")]
public string AttributeName { get; set; }
[JsonProperty("attributeValue")]
public string AttributeValue { get; set; }
[JsonProperty("varietyId")]
public int VarietyId { get; set; }
}
and I have a filter array and am trying to determine the selected variety based on the selection of attributes.
sample variety model
var varities = new List<SingleVariety>() {
new SingleVariety {
Sku="testsku", VarietyId=1, VarietyName="test 1",
Attributes = new List<SingleAttribute>
{ new SingleAttribute { AttributeName = "Size", AttributeValue="Large" },
new SingleAttribute{ AttributeName = "Color", AttributeValue = "Red"}
}},
new SingleVariety {
Sku="testsku2", VarietyId=2, VarietyName="test 2",
Attributes = new List<SingleAttribute>
{ new SingleAttribute { AttributeName = "Size", AttributeValue="Small" },
new SingleAttribute{ AttributeName = "Color", AttributeValue = "Red"}
}},
new SingleVariety {
Sku="testsku3", VarietyId=3, VarietyName="test 3",
Attributes = new List<SingleAttribute>
{ new SingleAttribute { AttributeName = "Size", AttributeValue="Very Large" },
new SingleAttribute{ AttributeName = "Color", AttributeValue = "Black"}
}}
};
sample filter array
var filterObject = new List<SingleAttribute> {
new SingleAttribute { AttributeName = "Size", AttributeValue="Large" },
new SingleAttribute{ AttributeName = "Color", AttributeValue = "Red"}
};
Variety with variety id 1 should be found because the filter object contains size=large and color=red. Can anyone help me with this?
I have used predicates and LINQ but I was not successful.
| How to find the matching element in a list from different list? in C# | One (flexible) way of doing this would be to chain your queries:
//Start with the whole set
var results = (IEnumerable<SingleVariety>)varities;
for (int i = 0; i < filterObject.Count; i++)
{
var filter = filterObject[i];
//Refine with each attribute match
results = results.Where(v => v.Attributes.FirstOrDefault(a => a.AttributeName == filter.AttributeName)?.AttributeValue == filter.AttributeValue);
}
If we inspect the output:
Console.WriteLine($"Count: {results.Count()}");
foreach (var result in results)
{
Console.WriteLine($"Name: {result.VarietyName}, Id: {result.VarietyId}");
}
We get the following:
Count: 1
Name: test 1, Id: 1
We cast it to the IEnumerable<SingleVariety> interface so that the chaining can work since the Linq methods will return that type.
A couple of things worth noting: this will support any number of filters, but it will be an AND operation.
Technically, you don't need the filter variable. You could just do it inline but I like to write code this way for clarity.
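For comparison, a sketch of an equivalent single expression with the same AND semantics (not required, just an alternative to the chained loop above):
var matches = varities.Where(v =>
    filterObject.All(f =>
        v.Attributes.Any(a => a.AttributeName == f.AttributeName
                           && a.AttributeValue == f.AttributeValue)));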
|
76389895 | 76390260 | I need to change the theme using a StateProvider from flutter_riverpod. I don't understand what I did wrong here.
void main() {
WidgetsFlutterBinding.ensureInitialized();
runApp(const ProviderScope(child: MyApp()));
}
class MyApp extends ConsumerWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context, WidgetRef ref) {
final isDarkTheme = ref.watch(isDarkThemeProvider.notifier).state;
return GestureDetector(
onTap: () {
FocusScope.of(context).unfocus();
},
child: ScreenUtilInit(
designSize: const Size(360, 690),
minTextAdapt: true,
splitScreenMode: true,
builder: (context, child) {
return MaterialApp(
builder: FToastBuilder(),
debugShowCheckedModeBanner: false,
title: 'code',
theme: isDarkTheme ? Themes().darkTheme : Themes().lightTheme,
onGenerateRoute: onAppGenerateRoute(),
routes: appRoutes(),
initialRoute: SplashPage.route);
},
),
);
}
}
This is my theme class and theme StateProvider:
import 'package:flutter/material.dart';
import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:tr_qr_code/utils/colors.dart';
class Themes {
final ThemeData lightTheme = ThemeData(
scaffoldBackgroundColor: colorWhite,
splashColor: Colors.transparent,
highlightColor: Colors.transparent,
appBarTheme: const AppBarTheme(color: colorWhite),
fontFamily: 'OpenSans',
useMaterial3: true,
);
final ThemeData darkTheme = ThemeData(
scaffoldBackgroundColor: colorBlack,
splashColor: Colors.transparent,
highlightColor: Colors.transparent,
appBarTheme: const AppBarTheme(color: colorBlack),
fontFamily: 'OpenSans',
useMaterial3: true,
);
}
final isDarkThemeProvider = StateProvider<bool>((ref) => false);
On the toggle switch, inside onTap, the code below is used to update the state:
onTap: () {
ref.read(isDarkThemeProvider.notifier).update(
(state) => !ref.read(isDarkThemeProvider.notifier).state);
},
I tried flutter clean, flutter pub upgrade, etc.
| The UI theme state is not updating until I resave the code while using flutter_riverpod stateprovider | Inside the MyApp class it should be
final isDarkTheme = ref.watch(isDarkThemeProvider);
Watching the provider itself (rather than reading .notifier.state) makes the widget rebuild whenever the value changes. And alternatively, inside your onTap you can toggle the value like this:
onTap: () {
ref
.read(isDarkThemeProvider.notifier)
.update((state) => !state);
},
|
76388828 | 76388921 | I was unit testing in C#, and I found the following code gives an overflow exception:
using System;
public class Program
{
public static void Main()
{
int i = 0;
Console.WriteLine(new float[i - 1]);
// System.OverflowException: Arithmetic operation resulted in an overflow.
}
}
https://dotnetfiddle.net/clbgZ3
However, if you explicitly attempt to initialize a negative array, you get the following error:
Console.WriteLine(new float[-1]);
// Compilation error: Cannot create an array with a negative size
Why does initializing a negatively-sized array cause an overflow exception, and not a different type of error?
| Why does initializing a negatively-sized array cause an overflow exception? | This behaviour is explicitly specified in C# language specification, section 12.8.16.5
The result of evaluating an array creation expression is classified as
a value, namely a reference to the newly allocated array instance. The
run-time processing of an array creation expression consists of the
following steps:
(...)
The computed values for the dimension lengths are validated, as follows: If one or more of the values are less than zero, a
System.OverflowException is thrown and no further steps are executed.
(emphasis mine)
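For illustration, a minimal sketch of observing the run-time path (the constant case new float[-1] never compiles, so only the computed-length case can be caught at run time):
using System;

public class Program
{
    public static void Main()
    {
        int i = 0;
        try
        {
            // the length is computed at run time, so the check happens at run time
            var arr = new float[i - 1];
        }
        catch (OverflowException ex)
        {
            Console.WriteLine(ex.Message); // "Arithmetic operation resulted in an overflow."
        }
    }
}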
|
76390204 | 76390266 | I am currently struggling with container queries. As long as I just use min-width and max-width, everything is fine and works well. As soon as I try to use logical operators like and/or, it doesn't work anymore.
.wrapper {
width: 300px;
container-name: wrapper;
container-type: inline-size;
}
.box {
background-color: #0000ff;
color: #ffffff;
width: 100px;
display: flex;
align-items: center;
justify-content: center;
}
@container wrapper (min-width: 300px) {
.box {
background-color: #00ff00;
}
}
@container wrapper (min-width: 300px) and (min-height: 0px) {
.box {
background-color: #ff0000;
}
}
<div class="wrapper">
<div class="box">
<span>test</span>
</div>
</div>
See the codepen here: https://codepen.io/Resolver1412/pen/dygxWKY
I would expect that the box would become red instead of green, since the last container query would overwrite the previous one.
Does anyone know what's wrong or what might have happened? I currently use this Chrome version: Version 112.0.5615.121
| how to use min-height, aspect-ratio, ... with container queries? | According to the docs:
The inline-size CSS property defines the horizontal or vertical size
of an element's block, depending on its writing mode. It corresponds
to either the width or the height property, depending on the value of
writing-mode.
– inline-size | MDN Web Docs
In your example, since writing-mode has the default value of horizontal-tb, only width is usable in @container queries.
You can switch to container-type: size to use both inline and block size in queries:
.wrapper {
width: 300px;
height: 60px;
container-name: wrapper;
container-type: size;
}
@container wrapper (width >= 300px) and (height >= 0px) {
.box {
background-color: #ff0000;
}
}
Try it:
.wrapper {
width: 300px;
height: 60px;
container-name: wrapper;
container-type: size;
}
.box {
background-color: #0000ff;
color: #ffffff;
width: 100px;
display: flex;
align-items: center;
justify-content: center;
}
@container wrapper (width >= 300px) {
.box {
background-color: #00ff00;
}
}
@container wrapper (width >= 300px) and (height >= 0px) {
.box {
background-color: #ff0000;
}
}
<div class="wrapper">
<div class="box">
<span>test</span>
</div>
</div>
Or, you can use aspect-ratio instead of width and height:
@container wrapper (aspect-ratio > 1 / 2) {
.box {
background-color: #ff0000;
}
}
Try it:
.wrapper {
container-name: wrapper;
container-type: size;
}
.box {
background-color: #0000ff;
color: #ffffff;
width: 100px;
display: flex;
align-items: center;
justify-content: center;
}
@container wrapper (width >= 300px) {
.box {
background-color: #00ff00;
}
}
@container wrapper (aspect-ratio > 1 / 2) {
.box {
background-color: #ff0000;
}
}
<div class="wrapper">
<div class="box">
<span>test</span>
</div>
</div>
|
76391160 | 76391823 | I'm sorry if this sounds vague or idiotic, but consider the following:
if I have a main function like so:
int main(void)
{
int red_wins = 0;
game_loop(&red_wins);
// code
printf("red has %d wins\n", red_wins);
}
then have a game_loop function which calls another function that uses red_wins:
void game_loop(int *red_wins)
{
// code
particle_move(red_wins);
}
and then another function called particle_move which uses this red_wins variable:
void particle_move(int *red_wins)
{
// code
if (such and such)
{
*red_wins+= 10;
}
}
where red_wins is only modified in the particle_move function and particle_move is only called from the game_loop function, would it be better to do it as shown above or by passing a pointer to a pointer in the particle_move function? Or, but I've been told it's bad practice, by using a global variable red_wins? Or is there another, better, way that I've overlooked?
| Does it make a difference to pass a pointer to a pointer as an argument or simply pass the first pointer? | You only need to pass a pointer to pointer in the following situations:
Where the called function is updating the pointer value, not the thing being pointed to:
void update( T **p )
{
*p = new_TStar_value();
}
int main( void )
{
T *var;
update( &var ); // updates var
}
Where the pointer is pointing to the first in a sequence of pointers:
void foo( int **p )
{
// do something interesting
}
int main( void )
{
int **p = malloc( sizeof *p * ROWS );
if ( p )
{
for ( size_t i = 0; i < ROWS; i++ )
p[i] = malloc( sizeof *p[i] * COLS );
}
foo( p );
return 0;
}
In your case, particle_move is updating the same thing that game_loop is updating, so you don't need to add another layer of indirection. You're doing it right, here.
|
76388620 | 76388953 | I'm on Windows. I'm trying to run label-studio in a Docker image and enable automatic annotations with a Tesseract machine learning model, provided by label-studio-ml-backend, in another Docker image. (I'm discovering Docker these days...)
Set up:
So far I was able to launch a docker with label-studio, and a docker with tesseract:
# Run the label-studio
docker run --name lbl-studio -it -p 8080:8080 -v label-studio-data:/label-studio/data/ heartexlabs/label-studio:latest label-studio
# DL and run tesseract image
git clone https://github.com/heartexlabs/label-studio-ml-backend
cd label-studio-ml-backend/label_studio_ml/examples/tesseract/
docker-compose up
At this point I have 2 images running on docker (or 3/4, I don't really know how to interpret the 'tesseract' image)
Network:
Here is some network info I could gather; I don't know how bad it is that lbl-studio is on 172.17 and the two others are on 172.18...
# Get ips of images
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' server # 172.18.0.3
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' redis # 172.18.0.2
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' lbl-studio # 172.17.0.2
# server: 172.18.0.3
# redis: 172.18.0.2
# lbl-studio: 172.17.0.2
so far redis can ping server but can't ping lbl-studio
Problem:
But, when I go to http://127.0.0.1:8080/, create a project, and try to link the machine learning wizard (http://127.0.0.1:8080/projects/1/settings/ml > add model), I'm not able to connect the tesseract server to the lbl-studio.
The urls I tried to connect are:
http://127.0.0.1:9090/
http://172.18.0.3:9090/
Going Further:
I tried to dig deeper and ping the server from lbl-studio, but nothing happened.
docker exec -it --user root lbl-studio /bin/bash
apt update
apt install iputils-ping
ping 127.18.0.3 # Nothing happening: 100% packet loss ;)
Question:
How can I connect lbl-studio to the server ?
Thank you for your help :)
| Connect 2 Docker images for label-studio | Add lbl-studio to tesseract's docker-compose file as a third service. To connect from your computer to the services, use http://127.0.0.1:8080 and http://127.0.0.1:9090. To connect between tesseract and lbl-studio, use the service names: http://lbl-studio:8080 and http://server:9090. Example:
version: "3.8"
services:
redis:
image: redis:alpine
container_name: redis
hostname: redis
volumes:
- "./data/redis:/data"
expose:
- 6379
server:
container_name: server
build: .
environment:
- MODEL_DIR=/data/models
- RQ_QUEUE_NAME=default
- REDIS_HOST=redis
- REDIS_PORT=6379
ports:
- 9090:9090
depends_on:
- redis
links:
- redis
volumes:
- "./data/server:/data"
- "./logs:/tmp"
lbl-studio:
image: heartexlabs/label-studio:latest
ports:
- 8080:8080
volumes:
- label-studio-data:/label-studio/data/
volumes:
label-studio-data:
|
76391465 | 76391838 | In the uvicorn example, one writes uvicorn filename:attributename and thereby starts the server. However, the interface I have generated has no such method attributename in filename. Therefore, I am unsure what to pass as attributename.
Generated code in main.py
"""
Somename
Specification for REST-API of somename.
The version of the OpenAPI document: 1.0.0
Generated by: https://openapi-generator.tech
"""
from fastapi import FastAPI
from openapi_server.apis.some_api import router as SomeApiRouter
app = FastAPI(
title="SomeName",
description="Specification for REST-API of somename",
version="1.0.0",
)
app.include_router(SomeApiRouter)
| How to start from example diverging REST interface? | The attribute name that you should specify is the name of the variable that holds your FastAPI instance. As they say in the docs:
The ASGI application should be specified in the form path.to.module:instance.path.
In this case for you, it would be uvicorn main:app where main.py is the file your code is in and app is the name of the variable in that file that holds your FastAPI instance.
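So, for the generated code above, starting the server from the directory containing main.py would look like this (the --reload flag is optional and just enables auto-restart during development):
uvicorn main:app --reload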
|
76390244 | 76390337 | Trying to clear an MDList
New and trying to learn :)
I have a simple GUI using Kivy and KivyMD.
One button adds a list of TwoLineListItems,
and I would like the other button to clear the previously generated list.
The idea is for one button to add the list and the other button to clear it,
so I can populate the list and clear it over and over by clicking the relevant buttons.
.py file code -
from kivy.app import App
from kivymd.app import MDApp
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.widget import Widget
from kivy.properties import ObjectProperty
from kivy.lang import Builder
from kivy.uix.gridlayout import GridLayout
from kivymd.uix.list import TwoLineListItem
#designate our kv design file
Builder.load_file('shd_proto_kv_cfg.kv')
class Search(TwoLineListItem):
    pass

class MyLayout(BoxLayout):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def add_entries(self):
        for x in range(0, 10):
            item = Search()
            self.ids.List.add_widget(item)

    def rem_entries(self):
        self.ids.List.remove_widget()
        pass

class ShdApp(MDApp):
    def build(self):
        return MyLayout()

if __name__ == '__main__':
    ShdApp().run()
.kv file code -
<Search>:
    text: "Title"
    secondary_text: "Description"

<MyLayout>:
    orientation: "horizontal"
    padding: 25
    spacing: 10
    BoxLayout:
        orientation: "vertical"
        Label:
            text: "Marker01"
            font_size: 25
            background_color: (196/255, 140/255, 96/255, 1)
            canvas.before:
                Color:
                    rgba: self.background_color
                Rectangle:
                    size: self.size
                    pos: self.pos
        Button:
            text: "add em"
            on_press: root.add_entries()
        Button:
            text: "clear em"
            on_press: root.rem_entries()
    BoxLayout:
        orientation: "vertical"
        cols: 1
        ScrollView:
            MDList:
                id: List
the error i get from running the above code -
TypeError: Layout.remove_widget() missing 1 required positional argument: 'widget'
As mentioned, I would like to populate with one button and clear with the other,
and be able to populate, clear, populate, clear.
I'm stuck on this and I do feel it's my lack of understanding as a learner that's holding me back.
Can someone please help me with this code so I can get it working and play around to understand it better?
Eventually I will repurpose it into my first project, but I need some help understanding where I'm going wrong here lol
Thanks!
| Trying to clear kivymd MDList | Use:
self.ids.List.clear_widgets()
instead of:
self.ids.List.remove_widget()
The clear_widgets() method removes all the children of the object. The remove_widget() method removes just one child (and that child must be specified).
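Applied to the code in the question, rem_entries would become (a minimal sketch):
    def rem_entries(self):
        # removes every previously added TwoLineListItem from the MDList
        self.ids.List.clear_widgets()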
|
76390299 | 76390358 | I would like to use the names of the INDEX factor in my FUN function in tapply.
My data and function are more complex but here is a simple reproducible example :
data <- data.frame(x <- c(4,5,6,2,3,5,8,1),
name = c("A","B","A","B","A","A","B","B"))
myfun <- function(x){paste("The mean of NAME is ", mean(x))}
tapply(data$x, data$name, myfun)
Result :
A B
"The mean of NAME is 4.5" "The mean of NAME is 4"
Where I would like NAME to be A or B.
| R tapply : how to use INDEX names as a FUN additional argument? | One option would be to pass both the value and and the index column to your function:
data <- data.frame(
x = c(4, 5, 6, 2, 3, 5, 8, 1),
name = c("A", "B", "A", "B", "A", "A", "B", "B")
)
myfun <- function(x) {
sprintf("The mean of %s is %f", unique(x[[2]]), mean(x[[1]]))
}
tapply(data[c("x", "name")], data$name, myfun)
#> A B
#> "The mean of A is 4.500000" "The mean of B is 4.000000"
|
76388795 | 76388962 | In my code, I need to add the value of the specific checkbox that has been checked to the span element. The problem is that on the first clicks of the checkboxes, the calculation goes wrong.
As you can see, if you click a checkbox for the first time, it subtracts the value instead of adding it. Is there something I forgot to add? My code is listed below. Thank you in advance.
const s = document.querySelectorAll('#enroll-subject');
const cue = document.getElementById('cu');
let cu = parseInt(cue.textContent.replace('Current Units: ', '').trim());
s.forEach(cb => {
cb.addEventListener('change', updateTotalUnits);
});
function updateTotalUnits() {
let totalUnits = cu;
s.forEach(cb => {
if (cb.checked) {
console.log("checked");
totalUnits += parseInt(cb.value);
} else {
console.log("not checked");
totalUnits -= parseInt(cb.value);
}
});
cue.innerHTML = `Current Units: ${totalUnits}`;
}
<div class="irreg-container" style="display:flex; flex-direction:column; text-align: center;">
<div class="header" style="display:flex; flex-direction:column;">
<span style="padding: 1em;" id="cu">Current Units: 15</span>
<span style="padding: .7em;font-size:1.3em;">Checkboxes</span>
</div>
<div class="subjects" style="display:flex; flex-direction: column;">
<table>
<tbody>
<tr>
<td style="width: 100%;">Checkbox 1</td>
<td style="width: 100%;"><input class="sbj-checkbox" type="checkbox" name="enroll-subject" value="4" id="enroll-subject">
</td>
</tr>
<tr>
<td style="width: 100%;">Checkbox 2</td>
<td style="width: 100%;"><input class="sbj-checkbox" type="checkbox" name="enroll-subject" value="4" id="enroll-subject">
</td>
</tr>
<tr>
<td style="width: 100%;">Checkbox 3</td>
<td style="width: 100%;"><input class="sbj-checkbox" type="checkbox" name="enroll-subject" value="4" id="enroll-subject">
</td>
</tr>
<tr>
<td style="width: 100%;">Checkbox 4</td>
<td style="width: 100%;"><input class="sbj-checkbox" type="checkbox" name="enroll-subject" value="4" id="enroll-subject">
</td>
</tr>
</tbody>
</table>
<div class="button-container" style="text-align: center;">
<button class="submit"> Submit </button>
</div>
</div>
</div>
| Incrementing/Decrementing a number using checkbox | It makes no sense that you are looping over all checkboxes each time, and then subtract the value of those that are not checked - because you never added the values of those in the first place.
Just keep working with the current cu value, and then either add or subtract the value of the currently changed checkbox only.
const s = document.querySelectorAll('#enroll-subject');
const cue = document.getElementById('cu');
let cu = parseInt(cue.textContent.replace('Current Units: ', '').trim());
s.forEach(cb => {
cb.addEventListener('change', updateTotalUnits);
});
function updateTotalUnits() {
if (this.checked) {
console.log("checked");
cu += parseInt(this.value);
} else {
console.log("not checked");
cu -= parseInt(this.value);
}
cue.innerHTML = `Current Units: ${cu}`;
}
<div class="irreg-container" style="display:flex; flex-direction:column; text-align: center;">
<div class="header" style="display:flex; flex-direction:column;">
<span style="padding: 1em;" id="cu">Current Units: 15</span>
<span style="padding: .7em;font-size:1.3em;">Checkboxes</span>
</div>
<div class="subjects" style="display:flex; flex-direction: column;">
<table>
<tbody>
<tr>
<td style="width: 100%;">Checkbox 1</td>
<td style="width: 100%;"><input class="sbj-checkbox" type="checkbox" name="enroll-subject" value="4" id="enroll-subject">
</td>
</tr>
<tr>
<td style="width: 100%;">Checkbox 2</td>
<td style="width: 100%;"><input class="sbj-checkbox" type="checkbox" name="enroll-subject" value="4" id="enroll-subject">
</td>
</tr>
<tr>
<td style="width: 100%;">Checkbox 3</td>
<td style="width: 100%;"><input class="sbj-checkbox" type="checkbox" name="enroll-subject" value="4" id="enroll-subject">
</td>
</tr>
<tr>
<td style="width: 100%;">Checkbox 4</td>
<td style="width: 100%;"><input class="sbj-checkbox" type="checkbox" name="enroll-subject" value="4" id="enroll-subject">
</td>
</tr>
</tbody>
</table>
<div class="button-container" style="text-align: center;">
<button class="submit"> Submit </button>
</div>
</div>
</div>
|
76391373 | 76391893 | Is there any way to use hard borders for RANGE?
The correct code is:
SELECT user_id,
created_at,
COUNT(*) OVER (ORDER BY created_at
RANGE BETWEEN '30 days' PRECEDING
AND '30 days' FOLLOWING) AS qty_in_period
But this sample is wrong:
SELECT user_id,
created_at,
COUNT(*) OVER (ORDER BY created_at
RANGE BETWEEN UNBOUNDED PRECEDING
AND (reg_date+interval) FOLLOWING) AS qty_in_period
reg_date is timestamp type, interval is '1 month'.
I know it can be done with a WHERE construction, but I need to check the possibility of doing it via a window function.
Please help.
| SQL window function and (date+interval) as a border of range |
Is there any way to use hard borders for RANGE?
No. The documentation is explicit about it:
In the offset PRECEDING and offset FOLLOWING frame options, the offset must be an expression not containing any variables, aggregate functions, or window functions.
Attempting to use such syntax raises the following error:
ERROR: argument of RANGE must not contain variables
One alternative uses a correlated subquery, or lateral join. The last query in your example could be written as:
select user_id, created_at, x.*
from mytable t
cross join lateral (
select count(*) qty_in_period
from mytable t1
where t1.created_at <= t.created_at + t.intval
) x
|
76390309 | 76390403 | Here's an easy algorithm question about stack and queue, could anyone please help me to look at what's wrong with my code?
Implement a queue with two stacks. The declaration of the queue is as follows. Implement its two functions appendTail and deleteHead, which perform the functions of inserting an integer at the end of the queue and deleting an integer at the head of the queue, respectively. (If there are no elements in the queue, the deleteHead operation returns -1 )
class CQueue(object):
    def __init__(self) -> None:
        self.__stackA = []
        self.__stackB = []

    def appendTail(self, val: int) -> None:
        self.__stackA.append(val)

    def deleteHead(self) -> int:
        if self.__stackB == 0:
            if self.__stackA == 0:
                return -1
            else:
                self.__stackB.append(self.__stackA.pop())
                return self.__stackB.pop()
        else:
            return self.__stackB.pop()
My code is above. I tried to separate the situation into: 1) B = 0, A = 0 (returns -1); 2) B = 0, A != 0 (transferring the elements from A to B), and 3) B != 0 (directly popping the front element)
The correct input and output should be:
input:
["CQueue","appendTail","deleteHead","deleteHead","deleteHead"] [[],[3],[],[],[]]
output:
[null,null,3,-1,-1]
Thank you for paying attention to the question and would be appreciated if you could help.
| Algorithm question - Stack and Queue - easy | There are two issues:
Comparing a list with 0 is not really useful, as that will never be true. To test whether a list is empty, you can use not self.__stackA or len(self.__stackA) == 0, or a combination of the two.
When stack B is empty, but stack A has values, you should not transfer one element from the top of stack A to stack B, but should transfer all its elements, so that the element that was at the bottom of stack A gets to be the top element of stack B.
With those two remarks taken care of, your code would look like this:
class CQueue(object):
    def __init__(self) -> None:
        self.__stackA = []
        self.__stackB = []

    def appendTail(self, val: int) -> None:
        self.__stackA.append(val)

    def deleteHead(self) -> int:
        if not self.__stackB:  # don't compare with 0
            if not self.__stackA:
                return -1
            else:
                while self.__stackA:  # Transfer ALL elements
                    self.__stackB.append(self.__stackA.pop())
                return self.__stackB.pop()
        else:
            return self.__stackB.pop()
You can also avoid some duplication of code:
    def deleteHead(self) -> int:
        if not self.__stackB:
            while self.__stackA:
                self.__stackB.append(self.__stackA.pop())
            if not self.__stackB:  # Still nothing there...
                return -1
        return self.__stackB.pop()  # Common action when there is data
Remark: I would have expected the stacks to be created with a specific class, because when you create them as standard lists, there is no reason why you could do non-stacky things with them, like reversing them, etc, which obviously is not to be allowed in this challenge.
|
76389693 | 76390406 | Here I attached a screenshot of the effect the button should take on hover:
https://prnt.sc/xJSqNRU-IqdQ
I am trying with a skew effect, but it doesn't work with the button here. Here is my attempt:
.skew-button {
display: inline-block;
padding: 10px 20px;
background-color: #333;
color: #fff;
border: none;
font-size: 16px;
transition: all 0.3s;
position: relative;
overflow: hidden;
}
.skew-button:before {
content: "";
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
background-color: red;
transform-origin: top left;
transform: skewX(-20deg);
transition: all 0.3s;
z-index: -1;
opacity: 0;
}
.skew-button:hover {
background-color: red;
}
.skew-button:hover:before {
left: -100%;
opacity: 1;
transform-origin: top right;
transform: skewX(0deg);
}
<button class="skew-button">Hover Me</button>
| Apply effect of button as per the attached screenshot on hover | The easiest way to do this, that I can think of, is the following. There are explanatory comments in the code:
/* simple reset to remove default margins and padding, and to
force all browsers to use the same algorithm for sizing
elements: */
*,
::before,
::after {
box-sizing: border-box;
margin: 0;
padding: 0;
}
body {
background-image: radial-gradient(circle at 0 0, currentColor, slategray);
block-size: 100vh;
}
main {
/* to take all available space on the block-axis: */
block-size: 100%;
/* just an easy means of centering the content
visually, in both the block and inline axes: */
display: grid;
place-content: center;
}
button {
/* overriding the default background-color of the
<button> element: */
background-color: transparent;
/* removing the default border: */
border: 0 none transparent;
font-size: 2rem;
/* creates a stacking context so that the pseudo-
elements are positioned "within" this element
and can't be positioned "behind," or lower-than,
the <button>: */
isolation: isolate;
padding-block: 1em;
padding-inline: 2em;
/* in order to position the pseudo-elements in
relation to this element: */
position: relative;
}
button::before,
button::after {
/* custom CSS property for consistency across
the demo: */
--offset: 2em;
/* setting the background-color of both
pseudo-elements to the --background CSS
custom property (this is declared later)
or to the default color of white (#fff): */
background-color: var(--background, #fff);
/* required in order to render the pseudo-elements
to the page: */
content: '';
/* using clip-path to control the shape, rather
than using transforms; this is the initial state,
the simple rectangle: */
clip-path: polygon(0 0, 100% 0, 100% 100%, 0 100%);
position: absolute;
/* using inset to position the pseudo-elements with
an offset of 0 on the top, right, bottom and left;
this means the element takes all available space: */
inset: 0;
/* transitioning the clip-path: */
transition: clip-path 0.3s linear;
}
button::before {
/* declaring the custom --background property: */
--background: red;
/* positioning the element lower down the 'stack'
than the parent element (the use of isolation: isolate
was to keep the pseudo-elements in front of any
background-color property that might be set on
the <button>, despite a lower z-index): */
z-index: -1;
}
button::after {
--background: yellow;
z-index: -2;
}
/* here we update the clip-path, using the --offset variable,
var() and calc(), to set the clipping to create the
parallelogram shape */
button:hover::before {
clip-path: polygon( var(--offset) 0, 100% 0, calc(100% - var(--offset)) 100%, 0 100%);
}
button:hover::after {
clip-path: polygon( 0 0, calc(100% - var(--offset)) 0, 100% 100%, var(--offset) 100%);
}
<main>
<button>Some generic text</button>
</main>
JS Fiddle demo.
References:
background-color.
border.
box-sizing.
block-size.
calc().
content.
clip-path.
display.
font-size.
inline-size.
inset.
isolation.
margin.
padding.
padding-block.
padding-inline.
place-content.
position.
transition.
var().
z-index.
|
76391827 | 76391909 | There are two WAR files that rarely change, and they should be running on my machine.
The Tomcat path is /Users/myuser/Downloads/apache-tomcat-9.0.53, and the Tomcat server configuration in IntelliJ also uses it.
If I deploy the WARs in the webapps directory and then run another Java project that uses Tomcat in IntelliJ, I can't access these WARs. It seems like these WARs are missing, but they are in the webapps path.
After stopping the Tomcat server in IntelliJ, these WARs are available again.
To work around the problem, my current project configuration builds and deploys these WARs every time, but that takes more time. How can I solve this?
Tomcat Server
Running
My project [local]
sec-web-services:war exploded [Republish] (first war file that rarely changes)
documents-web-services:war exploded [Republish] (second war file that rarely changes)
api-web-services:war exploded [Republish]
project-webapp:war exploded [Republish]
Tomcat Server Config in IntelliJ
Tomcat Home /Users/myuser/Downloads/apache-tomcat-9.0.53
Tomcat base directory /Users/myuser/Downloads/apache-tomcat-9.0.53
| How to keep WAR files running in Tomcat while I'm using IntelliJ? | IntelliJ IDEA's Tomcat run configuration has an option to deploy applications already present in the webapps directory.
|
76391039 | 76391910 | I have been dealing with customer bills of materials that contain references with numbers separated by a dash, rather than the full sequence of references spelled out, e.g. C1-4 instead of C1, C2, C3, C4 or C1 C2 C3 C4.
Some customers will use a comma to separate references, some only a space, and sometimes there is a mix of the two, which also complicates things. Here is an example:
R161-169
R2 R5, R7 R11
R103-7
R26 R28-30 R42, R45-46, R62-65, R70-71, R92-102, R113-114
R31-35 R40-41 R56-61 R72-79 R86-91
R36, R38-39
I'm trying to make a macro that will generate the full set of references automatically for only the selected portion of the references column, and generate that full set of references in the column next to it.
Sometimes customers leave a blank line between sections of references. Empty cells should remain empty in the output.
I found one place online that had asked a very similar question - https://www.mrexcel.com/board/threads/splitting-out-numbers-separated-by-dash.679290/ but I did not understand the code there and it did not work for what I've been trying to do.
I am not great with VBA, but I got the code below running without throwing any errors so far; however, it doesn't generate the full set of references. It just copies them as they are, and I do not know where I went wrong.
Sub SplitReferences()
'June 1, 2023
Dim inputRange As Range
Dim outputCell As Range
Dim inputArea As Range
Dim inputCell As Range
Dim startNum As Long
Dim endNum As Long
Dim i As Long
Dim outputString As String
' Set the input range where your values are
Set inputRange = Selection ' Use the selected range as input
' Set the output range where you want the split references
Set outputCell = inputRange.Offset(0, 1).Cells(1) ' Output in the column next to the input
' Loop through each area in the input range
For Each inputArea In inputRange.Areas
' Loop through each cell in the area
For Each inputCell In inputArea
' Split the value by dash
Dim parts() As String
parts = Split(inputCell.Value, "-")
' Check if there is a dash in the value
If UBound(parts) > 0 Then
' Extract the start and end numbers
startNum = Val(parts(0))
endNum = Val(parts(1))
Else
' If there is no dash, treat it as a single value
startNum = Val(parts(0))
endNum = Val(parts(0))
End If
' Loop through the numbers and add them to the output range
For i = startNum To endNum
outputCell.Value = inputCell.Offset(i - startNum).Value
Set outputCell = outputCell.Offset(1) ' Move to the next row
Next i
Next inputCell
Next inputArea
End Sub
| Excel Macro for changing references with numbers separated by a dash into the full set of references | Another approach:
Sub Tester()
Dim c As Range, arr, el, txt As String, rv As String, sep As String
For Each c In Selection.Cells 'loop selected range
txt = Trim(c.Value)
If Len(txt) > 0 Then 'cell has a value?
arr = Split(Normalize(txt), " ")
rv = ""
sep = ""
For Each el In arr
'convert to sequence if value has a dash
If InStr(el, "-") > 0 Then el = Sequence(CStr(el), " ")
If Len(el) > 0 Then rv = rv & sep & el
sep = " "
Next el
With c.Offset(0, 1)
.WrapText = True
.Value = rv
.EntireRow.AutoFit
End With
End If 'has content
Next c
End Sub
'Normalize the input to replace unwanted characters with spaces
' Remove runs of >1 space, and spaces around "-"
Function Normalize(ByVal txt As String)
Dim arr, el
arr = Array(vbLf, vbCr, Chr(160), ",", ";", ":") 'replace these with a space
For Each el In arr
txt = Replace(txt, el, " ")
Next el
Do While InStr(1, txt, " ") > 0 'remove any multi-space runs
txt = Replace(txt, " ", " ")
Loop
txt = Replace(txt, " -", "-") 'remove any spaces next to dashes
txt = Replace(txt, "- ", "-")
Normalize = txt
End Function
'Return a sequence from a pattern like [letter][number1]-[number2],
' separated by `sep`
Function Sequence(txt As String, sep As String)
Dim prefix, rv As String, sp, arr, v1, v2, i As Long
prefix = GetPrefix(txt) 'extract leading non-numeric character(s)
arr = Split(txt, "-")
v1 = NumberOnly(arr(0))
v2 = NumberOnly(arr(1))
If Len(v1) > 0 And Len(v2) > 0 Then
'handle case like R102-4, R102-24
If Len(v2) < Len(v1) Then v2 = Left(v1, Len(v1) - Len(v2)) & v2
v1 = CLng(v1)
v2 = CLng(v2)
For i = v1 To v2 'assumes V2 > v1...
rv = rv & sp & prefix & i
sp = sep
Next i
End If
Sequence = rv
End Function
'return the first [whole] number found in `txt`
Function NumberOnly(txt)
Dim i As Long, c, rv
For i = 1 To Len(txt)
c = Mid(txt, i, 1)
If c Like "#" Then
NumberOnly = NumberOnly & c
Else
If Len(NumberOnly) > 0 Then Exit Function
End If
Next i
End Function
'Return leading non-numeric character(s)
Function GetPrefix(txt As String)
Dim i As Long, c As String, rv
For i = 1 To Len(txt)
c = Mid(txt, i, 1)
If c Like "#" Then Exit For
rv = rv & c
Next i
GetPrefix = rv
End Function
|
76388622 | 76388983 | I am a bit confused as to how to interpret the dependencies list available on nuget.org and in the NuGet Package Manager in Visual Studio...
Sometimes, the list contains frameworks and a sub-list of dependencies per framework. Sometimes it does not contain the framework I have targeted for a particular project at all, how do I interpret this?
For example, this package's latest stable version is 7.0.5. I have chosen to only update my version to 6.0.16 because that is the latest version which mentions net6 as a dependency.
Versions 7 and above only mention net7.0 with dependencies, hence why I only dare to update to latest 6.X.X. Is my interpretation correct? Should I only update to the latest version mentioning net6 in the dependecy list (when my project targets net6), or can I update to the 7.X.X versions anyway?
| Understanding NuGet Package dependecies from Nuget.org | A package author can choose what target framework monikers (TFMs) to support; this can be diverse or ultra-specific. In this case (Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore), they have gone "specific", with the v6 versions of the library only targeting net6, v7 versions of the library only targeting net7, etc; so yes, your interpretation is correct and 6.0.16 looks to be the latest you can use with .net 6; if you attempt to install a v7 version of the lib, it should fail to install the package and/or build, because something targeting net7 could be using APIs that simply do not exist in net6, giving runtime failures - the package system attempts to protect you from that.
Now, it might be that the package could work on net6, but targeting multiple frameworks is effort that requires testing, and may involve code changes (in particular #if sections or similar, to use better approaches when available, or fallback approaches when not). It is not unreasonable for authors to say, more simply:
The vOlder version is what it is - we may supply out-of-band updates for security fixes or bugs that cross a certain threshold, and you can keep using vOlder with your netOlder applications, but if you want newer features you'll need to use vNewer on netNewer.
This is on a per-package basis, and many packages are far more diverse in what they target, either by having a wide range of TFMs, or by having wide-reaching targets such as netstandard2.0 (which in theory works on a wide range of platforms, by virtue of only consuming a common intersection of APIs).
|
76388773 | 76389015 | Need an Input on the below XPath requirement:
XML:
<component>
<Bundle>
<entry>
<resource>
<Condition>
<id value="123456"/>
</Condition>
</resource>
<search>
<mode value="match"/>
</search>
</entry>
<entry>
<resource>
<Condition>
<id value="123456"/>
</Condition>
</resource>
<search>
<mode value="match"/>
</search>
</entry>
<entry>
<resource>
<Condition>
<id value="654321"/>
</Condition>
</resource>
<search>
<mode value="include"/>
</search>
</entry>
</Bundle>
</component>
XSLT:
<xsl:with-param name="entries" select="//Bundle/entry/resource/Condition[not(id/@value=following::Condition/id/@value)]"/>
Currently it is eliminating duplicate condition entries. Now I need to enhance it to consider only <entry> elements with <mode value="match"/> and their respective conditions.
I tried different approaches like //Bundle/entry[<<expression>>]/resource/Condition[not(id/@value=following::Condition/id/@value)] and //Bundle/entry/resource/Condition[not(id/@value=following::Condition/id/@value) and ../../<<expression>>].
These do not give the expected result. Any input will be helpful.
expected output:
<Condition>
<id value="123456"/>
</Condition>
I want only the conditions which are not duplicates and which belong to entries with <mode value="match"/>.
| Xpath: evaluate condition at parent node along with filtering duplicate entries | Consider the following example of Muenchian grouping:
XSLT 1.0
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/>
<xsl:strip-space elements="*"/>
<xsl:key name="k1" match="entry[search/mode/@value='match']" use="resource/Condition/id/@value" />
<xsl:template match="/component">
<output>
<xsl:copy-of select="Bundle/entry[search/mode/@value='match'][count(. | key('k1', resource/Condition/id/@value)[1]) = 1]/resource/Condition"/>
</output>
</xsl:template>
</xsl:stylesheet>
Applied to your input example, this will return:
Result
<?xml version="1.0" encoding="UTF-8"?>
<output>
<Condition>
<id value="123456"/>
</Condition>
</output>
In XSLT 2.0 or higher, you could reduce this to:
<xsl:stylesheet version="2.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/>
<xsl:template match="/component">
<output>
<xsl:for-each select="distinct-values(Bundle/entry[search/mode/@value='match']/resource/Condition/id/@value)">
<Condition>
<id value="{.}"/>
</Condition>
</xsl:for-each>
</output>
</xsl:template>
</xsl:stylesheet>
|
76390149 | 76390411 | class ArticleResponse(
val id: Int,
val previewContent: String
)
fun ArticleResponse.mapToEntity() = Article(
id = id,
previewContent = previewContent,
content = null
)
class SingleArticleResponse(
val id: Int,
val content: String
)
fun SingleArticleResponse.mapToEntity() = Article(
id = id,
previewContent = null,
content = content
)
@Entity(tableName = "articles")
class Article(
@PrimaryKey
@ColumnInfo(name = "id")
val id: Int,
@ColumnInfo(name = "preview_content")
val previewContent: String?,
@ColumnInfo(name = "content")
val content: String?
)
Explanation:
When you look at the code, you have to know that a call to API-Endpoint X gives me a list of ArticleResponse. This response doesn't give me the content property.
When I then want to view the article, I call API-Endpoint Y, which gives my SingleArticleResponse as a Response. This response doesn't give me the previewContent property.
Problem:
No matter what I do, one of the properties content or previewContent will always be null in my local database.
Question:
How can I tell Room, that it should not override the property with null, if it previously has not been null? Is this possible?
| Android room: Don't override properties with null | I think the easiest and most robust way to do this is to have 6 separate update queries and run them all inside a transaction.
@Dao
interface ArticleDAO {
    // the queries must use the table/column names declared on the entity
    // ("articles", "preview_content"), not the Kotlin property names
    @Query("UPDATE articles SET content = :content WHERE id = :id")
    suspend fun updateContent(id: Int, content: String)

    @Query("UPDATE articles SET preview_content = :previewContent WHERE id = :id")
    suspend fun updatePreviewContent(id: Int, previewContent: String)

    //Other update queries here
}
suspend fun updateArticle(article: Article) {
    database.withTransaction {
        if (article.content != null) {
            articleDao.updateContent(article.id, article.content)
        }
        if (article.previewContent != null) {
            articleDao.updatePreviewContent(article.id, article.previewContent)
        }
        //Other updates here ...
    }
}
Let me know if this answers your question.
|
76390317 | 76390432 | I'm trying to write a program that waits when it sees its memory is becoming full. It finds out what the current available memory is using /proc/meminfo. Now I'm trying to test it by running systemd-run --scope -p MemoryMax=100M -p MemorySwapMax=0, but /proc/meminfo is still returning the old values (which I kind of get why it does that).
Is there another place or way I can retrieve the available memory that does look at the limits set by systemd-run?
Thanks in advance!
| systemd-run memory limit is not shown in /proc/meminfo, is there another way? | Systemd uses cgroups.
$ systemd-run -P --user -p MemoryMax=10240000 -p MemorySwapMax=0 bash -c 'd=/sys/fs/cgroup/$(cut -d: -f3 /proc/self/cgroup); tail $d/memory{.swap,}.max'
Running as unit: run-u923.service
==> /sys/fs/cgroup//user.slice/user-1000.slice/user@1000.service/app.slice/run-u923.service/memory.swap.max <==
0
==> /sys/fs/cgroup//user.slice/user-1000.slice/user@1000.service/app.slice/run-u923.service/memory.max <==
10240000
|
76388859 | 76389025 | I have this router:
const router = createBrowserRouter(
[
{
path: '/',
element: <Navigate to={'/dashboards'}/>
},
{
path: '/dashboards',
element: <Dashboards/>,
loader: () => store.dispatch(retrieveWarehouses()),
children: [
{
path: ':warehouse',
element: <Dashboard/>,
loader: ({ params }) => store.dispatch(monitorWarehouse(params.warehouse))
}
]
}
]
)
Defined as is, the <Dashboard/> component is not rendered, only its parent dashboard list (Dashboards, notice the plural). The loader of the child Dashboard is still triggered though.
If I don't use a nested route:
const router = createBrowserRouter(
[
{
path: '/',
element: <Navigate to={'/dashboards'}/>
},
{
path: '/dashboards',
element: <Dashboards/>,
loader: () => store.dispatch(retrieveWarehouses()),
},
{
path: '/dashboards/:warehouse',
element: <Dashboard/>,
loader: ({ params }) => store.dispatch(monitorWarehouse(params.warehouse))
}
]
)
The child component Dashboard is rendered properly, but the loader of the parent is not triggered.
Here are the components:
Dashboards
const Dashboards: React.FC<any> = () => {
const {
warehouses,
loading
} = useAppSelector(selectWarehouseListState)
if (loading) {
return (
<div className={'warehouse-list'}>
<h1>Select warehouse</h1>
<Spinner/>
</div>
)
}
return (
<div className={'warehouse-list'}>
<h1>Select warehouse</h1>
{
warehouses.map((wh: Warehouse) => (
<NavLink to={`/dashboards/${wh.name}`} key={wh.name}>
<div className={'selectable-warehouse container'}>
{wh.name}
</div>
</NavLink>
))
}
</div>
)
}
Dashboard
const Dashboard: React.FC<any> = () => {
const { loading } = useAppSelector(selectWarehouseState)
const { warehouse } = useParams()
const dispatch = useAppDispatch()
useEffect(() => {
return () => {
dispatch(stopMonitorWarehouse(warehouse))
}
}, [dispatch])
if (loading) {
return (
<div className={'dashboard loading shrinkable'}>
<div className={'header'}>
<NavLink to={'/dashboards'} className={'nav-back'}>
<ArrowBack/>
</NavLink>
<div className={'selected-warehouse-name'}>{warehouse}</div>
</div>
<div className={'status connecting'}>status: connecting</div>
<Spinner/>
</div>
)
}
return (
<div className={'dashboard active shrinkable'}>
<div className={'header'}>
<NavLink to={'/dashboards'} className={'nav-back'}>
<ArrowBack/>
</NavLink>
<div className={'selected-warehouse-name'}>{warehouse}</div>
</div>
<div className={'status connected'}>status: connected</div>
<div className={'logs-metrics'}>
<Logs/>
</div>
</div>
)
}
How can I access /dashboards/foo and trigger both loaders ?
| How to render a subroute and trigger react router parents loader? | If the Dashboards component is rendered as a layout route then it necessarily should render an Outlet for its nested routes to render their content into.
Example:
import { Outlet } from 'react-router-dom';
const Dashboards: React.FC<any> = () => {
const {
warehouses,
loading
} = useAppSelector(selectWarehouseListState)
return (
<div className={'warehouse-list'}>
<h1>Select warehouse</h1>
{loading ? (
<Spinner />
) : (
<>
{warehouses.map((wh: Warehouse) => (
<NavLink to={`/dashboards/${wh.name}`} key={wh.name}>
<div className={'selectable-warehouse container'}>
{wh.name}
</div>
</NavLink>
))}
<Outlet />
</>
)}
</div>
);
};
However in my use case I don't want both components to be rendered, the
latter should take precedence of the other.
If I understand this part you want the Dashboards and Dashboard components rendered independently, but the Dashboards's loader function to still be called even while on a nested route. For this you'll render Dashboards as a nested index route where the loader function is on the parent layout route.
Example:
const router = createBrowserRouter(
[
{
path: '/',
element: <Navigate to="/dashboards" />
},
{
path: '/dashboards',
loader: () => store.dispatch(retrieveWarehouses()),
children: [
{
index: true,
element: <Dashboards />,
},
{
path: ':warehouse',
element: <Dashboard />,
loader: ({ params }) => {
store.dispatch(monitorWarehouse(params.warehouse));
},
}
]
}
]
);
The "/dashboards" route will render an Outlet by default when no element is specified, and the Dashboards component will be rendered when the parent route is matched. No changes to the Dashboards component would be required.
|
76390643 | 76391915 | Consider this code
@Mapper
@RequiredArgsConstructor
public abstract class QuestionCommentMapper {
protected final QuestionService questionService;
public abstract QuestionComment dtoAndAuthenticationToQuestionComment(QuestionCommentRequestDto dto,
@Context Authentication auth);
@AfterMapping
protected void enrichWithOwner(@MappingTarget QuestionComment questionComment, @Context Authentication auth) {
Account owner = AuthenticationProcessor.extractAccount(auth);
questionComment.setOwner(owner);
}
@AfterMapping
protected void enrichWithQuestion(@MappingTarget QuestionComment questionComment,
@Context QuestionCommentRequestDto dto) {
Long questionId = dto.questionId();
Question question = questionService.getById(questionId);
questionComment.setQuestion(question);
}
}
Would Mapstruct pass a QuestionCommentRequestDto object to the enrichWithQuestion() method which is a source in the original mapping method? If not, how can I perform that second "enriching" without forgoing Mapstruct's code generation? If I write anything in the mapping method (e.g. use the QuestionCommentRequestDto instance to set QuestionComment's Question field, as in my enrichWithQuestion() method), Mapstruct won't generate anything, and I'll have to basically write everything manually
| Is Mapstruct capable of passing a source object to an @AfterMapping method? | Mapstruct is capable of that with no extra code needed. Here's the generated method
@Override
public QuestionComment dtoAndAuthenticationToQuestionComment(QuestionCommentRequestDto dto, Authentication auth) {
if ( dto == null ) {
return null;
}
QuestionComment questionComment = new QuestionComment();
questionComment.setText( dto.text() );
enrichWithOwner( questionComment, auth );
enrichWithQuestion( questionComment, dto ); // ← look at it
return questionComment;
}
What made me post this question was largely the fact that a chat bot said Mapstruct wouldn't pass the source object because
Any source parameters used in the regular mapping method are not passed to the @AfterMapping method, as they are not needed for any further mapping operations at that point.
It appears the chat bot was mistaken on that
|
76388819 | 76389035 | I need to retrieve a big amount of data.
I'm trying to order by 'id', but the query returns an empty collection. If I remove orderBy('id'), it works properly.
How to sort by document id?
mQuery = docRef.orderBy('id')
.limit(bulk_size)
.get();
| Firebase firestore db orderBy document id | When you call .orderBy('id') it means that you are trying to order the documents that you get from Firestore according to a field called id. If you want to order the documents according to the document ID, then please use:
mQuery = docRef.orderBy(firebase.firestore.FieldPath.documentId())
.limit(bulk_size)
.get();
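If you are on the modular (v9) Web SDK instead of the namespaced API shown above, the equivalent would be something like this (assuming db is your initialized Firestore instance and colPath is your collection path):
import { collection, query, orderBy, limit, getDocs, documentId } from "firebase/firestore";

const q = query(collection(db, colPath), orderBy(documentId()), limit(bulk_size));
const snapshot = await getDocs(q);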
|
76390160 | 76390445 | I got a Dashboard model that is filled by a scheduled Job.
dashboard model
class dashboard(models.Model):
topvul1=models.IntegerField(default=0)
topvul2=models.IntegerField(default=0)
topvul3=models.IntegerField(default=0)
I want to show the most found, second most found and third most found VID from the clientvul model, and fill it into my dashboard model once per day.
clientvul model
class clientvul(models.Model):
client= models.ForeignKey(client, on_delete=models.CASCADE)
vid=models.ForeignKey(vul, on_delete=models.CASCADE)
path=models.CharField(max_length=1000)
product=models.CharField(max_length=1000)
isactive=models.BooleanField(default=True)
class Meta:
constraints = [
models.UniqueConstraint(
fields=['client', 'VID'], name='unique_migration_host_combination'  # sets client and VID as the primary key
)
]
| DJANGO Get First, Second and Third most found Value in a Model | You can count the number of clientvuls for each vul and then order and return the first three:
from django.db.models import Count
vul.objects.alias(
    num_client=Count('clientvul')  # the reverse FK lookup name is the lowercased model name, without _set
).order_by('-num_client')[:3]
There is no need to make a model for this. Unless we are talking about billions of records, such queries run in milliseconds, and they are more robust since the result will also reflect the new order if, for example, a clientvul is removed.
|
76390300 | 76390453 | I have a loop that loads two float* arrays into __m256 vectors and processes them. Following this loop, I have code that loads the balance of values into the vectors and then processes them. So there is no alignment requirement on the function.
Here is the code that loads the balance of the data into the vectors:
size_t constexpr FLOATS_IN_M128 = sizeof(__m128) / sizeof(float);
size_t constexpr FLOATS_IN_M256 = FLOATS_IN_M128 * 2;
...
assert(bal < FLOATS_IN_M256);
float ary[FLOATS_IN_M256 * 2];
auto v256f_q = _mm256_setzero_ps();
_mm256_storeu_ps(ary, v256f_q);
_mm256_storeu_ps(&ary[FLOATS_IN_M256], v256f_q);
float *dest = ary;
size_t offset{};
while (bal--)
{
dest[offset] = p_q_n[pos];
dest[offset + FLOATS_IN_M256] = p_val_n[pos];
offset++;
pos++;
}
// the two vectors that will be processed
v256f_q = _mm256_loadu_ps(ary);
v256f_val = _mm256_loadu_ps(&ary[FLOATS_IN_M256]);
When I use Compiler Explorer, set to "x86-64 clang 16.0.0 -march=x86-64-v3 -O3" the compiler unrolls the loop when the assert(bal < FLOATS_IN_M256); line is present. However, assert() is ignored in RELEASE mode, meaning the loop won't be vectorized and unrolled.
To test, I defined NDEBUG and the loop is vectorized and unrolled.
I have tried adding the following in the appropriate places, but they don't work:
#pragma clang loop vectorize(enable)
#pragma unroll
#undef NDEBUG
The compiler should be able to see from the code before the snippet above that bal < 8 but it doesn't. How can I tell it this assertion is true when not in DEBUG mode?
| Give the CLANG compiler a loop length assertion | You can use __builtin_assume to give the compiler constraint information that is not explicitly in the code. This builtin is clang-specific; on gcc you can get the same effect by branching to __builtin_unreachable() when the condition does not hold.
In the posted code, just replace the assert with __builtin_assume(bal < FLOATS_IN_M256).
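For illustration, here is a minimal self-contained sketch of where such a hint goes; the function name fill_tail and the bound 8 are only placeholders, not the asker's exact code, and the second branch shows a gcc-compatible spelling:
#include <cstddef>

// Sketch: promise the optimizer that n is small so the scalar tail loop
// can be fully unrolled/vectorized without keeping a runtime check.
void fill_tail(float *dst, const float *src, std::size_t n)
{
#if defined(__clang__)
    __builtin_assume(n < 8);                // clang builtin
#else
    if (!(n < 8)) __builtin_unreachable();  // equivalent hint for gcc
#endif
    for (std::size_t i = 0; i < n; ++i)
        dst[i] = src[i];
}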
|
76391255 | 76391923 | I have a data file having multiple date fields coming in string data type. I am trying to validate the date field and discard the records having wrong date format. Data looks like below.
schema = StructType([StructField("id",StringType(),True), \
StructField("dt1",StringType(),True), \
StructField("dt2",StringType(),True)])
df = spark.createDataFrame([(1, "01/22/2010","03/25/2012"), (2, "01/12/2014",None),(3,"04/09/2011","12/23"),(5,None,"01/22/2010"),(6,"2005/12/04","2000/12/04"),(7,"01/01/2020","30/12/2019"),(8,"12/1999/21","05/01/2021"),(9,"12/2013/21",None),(9,None,None)], schema=schema)
Only the two date formats "M/d/y", "M/yy" are allowed to be passed in the validation. Records that do not pass the validation will be loaded in the error table.
This is sample data. In the actual file there are many date fields. I am trying to write a function which can perform this validation on all of the date fields.
| Date Validation in Pyspark | I think you can try to parse the dates and then filter rows with None values; you can keep a list of date columns to make it more generic:
from pyspark.sql.functions import coalesce, to_date, col
from pyspark.sql.types import StructField, StructType, StringType
schema = StructType([StructField("id",StringType(),True), \
StructField("dt1",StringType(),True), \
StructField("dt2",StringType(),True)])
df = spark.createDataFrame([(1, "01/22/2010","03/25/2012"), (2, "01/12/2014",None),(3,"04/09/2011","12/23"),(5,None,"01/22/2010"),(6,"2005/12/04","2000/12/04"),(7,"01/01/2020","30/12/2019"),(8,"12/1999/21","05/01/2021"),(9,"12/2013/21",None),(9,None,None)], schema=schema)
def custom_to_date(col):
formats = ("M/d/y", "M/yy")
return coalesce(*[to_date(col, f) for f in formats])
# Set CORRECTED mode to deal with invalid dates as None
spark.sql("set spark.sql.legacy.timeParserPolicy=CORRECTED")
df.cache()
date_columns = ["dt1", "dt2"]
valid_df = df
for c in date_columns:
valid_df = valid_df.filter(custom_to_date(col(c)).isNotNull())
error_df = df.subtract(valid_df)
error_df.show()
+---+----------+----------+
| id| dt1| dt2|
+---+----------+----------+
| 2|01/12/2014| null|
| 5| null|01/22/2010|
| 6|2005/12/04|2000/12/04|
| 7|01/01/2020|30/12/2019|
| 8|12/1999/21|05/01/2021|
| 9|12/2013/21| null|
| 9| null| null|
+---+----------+----------+
valid_df.show()
+---+----------+----------+
| id| dt1| dt2|
+---+----------+----------+
| 1|01/22/2010|03/25/2012|
| 3|04/09/2011| 12/23|
+---+----------+----------+
Update: If we need to replace the String columns as well with the parsed dates then we can do something like this:
from pyspark.sql.functions import coalesce, to_date, col
from pyspark.sql.types import StructField, StructType, StringType
schema = StructType([StructField("id",StringType(),True), \
StructField("dt1",StringType(),True), \
StructField("dt2",StringType(),True)])
df = spark.createDataFrame([(1, "01/22/2010","03/25/2012"), (2, "01/12/2014",None),(3,"04/09/2011","12/23"),(5,None,"01/22/2010"),(6,"2005/12/04","2000/12/04"),(7,"01/01/2020","30/12/2019"),(8,"12/1999/21","05/01/2021"),(9,"12/2013/21",None),(9,None,None)], schema=schema)
def custom_to_date(col):
formats = ("M/d/y", "M/yy")
return coalesce(*[to_date(col, f) for f in formats])
# Set CORRECTED mode to deal with invalid dates as None
spark.sql("set spark.sql.legacy.timeParserPolicy=CORRECTED")
df.cache()
date_columns = ["dt1", "dt2"]
for c in date_columns:
df = df.withColumn(c, custom_to_date(col(c)))
valid_df = df
for c in date_columns:
valid_df = valid_df.filter(col(c).isNotNull())
error_df = df.subtract(valid_df)
>>> valid_df.show()
+---+----------+----------+
| id| dt1| dt2|
+---+----------+----------+
| 1|2010-01-22|2012-03-25|
| 3|2011-04-09|2023-12-01|
+---+----------+----------+
>>> error_df.show()
+---+----------+----------+
| id| dt1| dt2|
+---+----------+----------+
| 2|2014-01-12| null|
| 5| null|2010-01-22|
| 6| null| null|
| 7|2020-01-01| null|
| 8| null|2021-05-01|
| 9| null| null|
+---+----------+----------+
|
76388794 | 76389049 | Suppose I've a simple model class:
public class Car
{
public string Make { get; init; }
public string Model { get; init; }
public string Year { get; init; }
}
In my ViewModel, I've two lists:
public class ViewModel
{
public ObservableCollection<Car> Cars { get; }
public List<Car> CanBeSold { get; }
public ViewModel()
{
Car car1 = new() { Make = "Toyota", Model = "Corolla", Year = "2020" };
Car car2 = new() { Make = "Honda", Model = "Civic", Year = "2021" };
Car car3 = new() { Make = "Mitsubishi", Model = "Lancer", Year = "2017" };
Cars = new();
CanBeSold = new();
Cars.Add(car1);
Cars.Add(car2);
Cars.Add(car3);
CanBeSold.Add(car2);
}
}
In my view, I'm binding a ListView to the Cars collection:
<ListView ItemsSource="{Binding Cars}">
<ListView.View>
<GridView>
<GridViewColumn Header="Make" DisplayMemberBinding="{Binding Path=Make}"/>
<GridViewColumn Header="Model" DisplayMemberBinding="{Binding Path=Model}"/>
<GridViewColumn Header="Year" DisplayMemberBinding="{Binding Path=Year}"/>
<GridViewColumn Header="Can Be Sold"/>
</GridView>
</ListView.View>
</ListView>
How can I also show a Yes/No based on whether the Car is in the CanBeSold list?
Thanks for any help.
| ListView Item - Generate column value based on criteria not part of Item's class | You may use a MultiBinding that contains a Binding to the CanBeSold property of the parent view model and a Binding to the current Car element.
<GridViewColumn Header="Can Be Sold">
<GridViewColumn.DisplayMemberBinding>
<MultiBinding>
<MultiBinding.Converter>
<local:ListElementConverter/>
</MultiBinding.Converter>
<Binding Path="DataContext.CanBeSold"
RelativeSource="{RelativeSource AncestorType=ListView}"/>
<Binding />
</MultiBinding>
</GridViewColumn.DisplayMemberBinding>
</GridViewColumn>
The Binding Converter checks if the element is contained in the list:
public class ListElementConverter : IMultiValueConverter
{
public object Convert(
object[] values, Type targetType, object parameter, CultureInfo culture)
{
return values.Length == 2 &&
values[0] is IList list &&
list.Contains(values[1])
? "Yes"
: "No";
}
public object[] ConvertBack(
object value, Type[] targetTypes, object parameter, CultureInfo culture)
{
throw new NotSupportedException();
}
}
|
76389832 | 76390470 | Trying to add or subtract two Series that contain the datatype List[i64]. The operation seems not to be supported.
a = pl.Series("a",[[1,2],[2,3]])
b = pl.Series("b",[[4,5],[6,7]])
c = a+b
this gives the error:
PanicException: `add` operation not supported for dtype `list[i64]`
I would expect an element-wise sum, like what would happen with a numpy array, for example:
c = [[5,7],[8,10]]
What's the correct syntax to add two series of lists?
| Polars - How to add two series that contain lists as elements | you can do the following:
c = (a.explode() + b.explode()).reshape((2,-1)).alias('c')
shape: (2,)
Series: 'a' [list[i64]]
[
[5, 7]
[8, 10]
]
Final thoughts: if your list has a fixed size, then you might consider using the new Polars Array datatype.
|
76383169 | 76391967 | My API allows users to upload and download files to my Azure Storage account. To do this, they need a SAS token with permissions based on whether they want to download or upload a file. I was wondering if there is a secure method to provide users with these tokens, rather than sending them through less secure channels such as email.
Edit for Clarification:
I plan on having hundreds of users accessing my Azure Storage account. I was planning on generating my token through Azure itself but I have been considering generating the SAS token inside of the API or in a separate Azure Function. My API uses an Azure Function with NodeJS.
| Is there a secure way to provide users with Shared Access Signature Tokens for Azure Storage containers? | Proposal 1: You can create a new Azure function as a proxy on your storage account for uploading/downloading. Thanks to managed identity, you won't have to provide a SAS token. User authorization on the Azure Function will ensure that the permission is removed when the user is no longer authorized.
Proposal 2: You can create a SAS token with an Azure Function and send it to the user inside your application (can be transparent to the user). This will enable you to create a SAS token with a short lifetime. If communication between clients and server uses TLS, it will guarantee secure transmission of your token.
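For the second proposal, a rough sketch of a Node.js Azure Function that hands out a short-lived, read-only SAS could look like the following; the container name, query parameter, 15-minute lifetime and environment variable names are placeholders, and the account key would come from your Function's app settings:
const { StorageSharedKeyCredential, generateBlobSASQueryParameters, BlobSASPermissions } = require("@azure/storage-blob");

module.exports = async function (context, req) {
    // account name/key are placeholders - read them from your Function's app settings
    const credential = new StorageSharedKeyCredential(process.env.ACCOUNT_NAME, process.env.ACCOUNT_KEY);

    const sas = generateBlobSASQueryParameters({
        containerName: "uploads",
        blobName: req.query.blobName,
        permissions: BlobSASPermissions.parse("r"),        // e.g. "cw" for uploads
        startsOn: new Date(),
        expiresOn: new Date(Date.now() + 15 * 60 * 1000),  // short lifetime
    }, credential).toString();

    context.res = { body: { sasToken: sas } };
};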
|
76387974 | 76389074 | I'm migrating an old Quarkus project from RESTEasy to RESTEasy Reactive and I have some difficulties migrating ResteasyContext.pushContext since there is no real 1:1 alternative in RESTEasy Reactive.
I'm using ResteasyContext.pushContext in my ContainerRequestFilter to push a custom object to the context and later retrieve it using @Context.
Something like in this minimal example i provided.
Filter:
package org.acme.filter;
import org.acme.pojo.CustomHttpRequest;
import org.jboss.resteasy.core.ResteasyContext;
import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.ext.Provider;
import java.time.LocalDateTime;
import java.util.Random;
@Provider
@ApplicationScoped
public class HttpRequestFilter implements ContainerRequestFilter {
@Override
public void filter(ContainerRequestContext requestContext) {
CustomHttpRequest request = CustomHttpRequest.builder()
.headers(requestContext.getHeaders())
.dateTime(LocalDateTime.now())
.text("Some random text for example " + new Random().nextInt(100))
.build();
ResteasyContext.pushContext(CustomHttpRequest.class, request);
}
}
Custom object I want to push to context:
package org.acme.pojo;
import lombok.Builder;
import lombok.Getter;
import lombok.ToString;
import javax.ws.rs.core.MultivaluedMap;
import java.time.LocalDateTime;
@Getter
@Builder
@ToString
public class CustomHttpRequest {
private String text;
private LocalDateTime dateTime;
private MultivaluedMap<String, String> headers;
private boolean secured;
}
And the later read it from context in my rest endpoint:
package org.acme;
import org.acme.pojo.CustomHttpRequest;
import org.acme.pojo.ResponseData;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
@Path("/hello")
public class GreetingResource {
@GET
@Path("{pathText}")
@Produces(MediaType.APPLICATION_JSON)
public ResponseData testContext(@Context CustomHttpRequest httpRequest,
@PathParam("pathText") String queryText) {
return ResponseData.builder()
.queryText(queryText)
.httpRequestText(httpRequest.getText())
.secured(httpRequest.isSecured())
.build();
}
}
Here is the full example on GitHub: https://github.com/pkristja/resteasy_context/tree/main
I have found some alternatives that work with RestEasyReactive like using ContainerRequestContext and setting the data using setProperty.
Build file changes:
Changed from implementation("io.quarkus:quarkus-resteasy-jackson") to implementation("io.quarkus:quarkus-resteasy-reactive-jackson")
Filter for setting oblect to context:
package org.acme.filter;
import org.acme.pojo.CustomHttpRequest;
import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.Context;
import javax.ws.rs.ext.Provider;
import java.time.LocalDateTime;
import java.util.Random;
@Provider
@ApplicationScoped
public class HttpRequestFilter implements ContainerRequestFilter {
@Context
ContainerRequestContext crContext;
@Override
public void filter(ContainerRequestContext requestContext) {
CustomHttpRequest request = CustomHttpRequest.builder()
.headers(requestContext.getHeaders())
.dateTime(LocalDateTime.now())
.text("Some random text for example " + new Random().nextInt(100))
.build();
crContext.setProperty("customHttpRequest", request);
}
}
Retrieving object from context:
package org.acme;
import org.acme.pojo.CustomHttpRequest;
import org.acme.pojo.ResponseData;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
@Path("/hello")
public class GreetingResource {
@GET
@Path("{pathText}")
@Produces(MediaType.APPLICATION_JSON)
public ResponseData testContext(@Context ContainerRequestContext crContext,
@PathParam("pathText") String queryText) {
CustomHttpRequest httpRequest = (CustomHttpRequest) crContext.getProperty("customHttpRequest");
return ResponseData.builder()
.queryText(queryText)
.httpRequestText(httpRequest.getText())
.secured(httpRequest.isSecured())
.build();
}
}
Is there any way to get the same functionality in RESTEasy Reactive that ResteasyContext.pushContext provided in RESTEasy? Retrieving each object from the context and casting it is really verbose and inefficient, because in my real example I have multiple custom objects pushed to the context with ResteasyContext.pushContext.
Thank you!
| Migration from RestEasy to RestEasyReactive with ResteasyContext and ContainerRequestFilter | When using RESTEasy Reactive, there is a far easier way of doing things like this: just use a CDI request scoped bean.
Something like the following should be just fine:
@Singleton
public class CustomHttpRequestProducer {
@RequestScoped
@Unremovable
public CustomHttpRequest produce(HttpHeaders headers) {
return new CustomHttpRequest(headers.getRequestHeaders(), LocalDateTime.now(), "dummy");
}
}
Then you would use it in your JAX-RS Resource as easily as:
@GET
@Produces(MediaType.TEXT_PLAIN)
public String hello(@Context CustomHttpRequest customHttpRequest) {
return customHttpRequest.getText();
}
Note that @Unremovable is only needed if you use CustomHttpRequest as method parameter.
If you however inject it as a field, @Unremovable is unnecessary.
UPDATE
After https://github.com/quarkusio/quarkus/pull/33793 becomes part of Quarkus (likely in 3.2) then @Unremovable will no longer be necessary even for the method parameter
|
76391970 | 76392001 | I want to search for the index of an element in an ArrayList, but I want to start searching from a starting index other than 0.
I tried it like this:
import java.util.*;
public class Test{
public static void main(String[] args) {
ArrayList<String> bricks = new ArrayList<String>(List.of("BBBB","CCCC","DDDD"));
System.out.println(bricks.subList(1, bricks.size()).indexOf("CCCC"));
}
}
Output:
0
Expected output:
1
I want to start searching for "CCCC" in "bricks" from the starting index "1" not from "0"
| How to find an index of the ArrayList from the starting index in Java? | Your code finds the index within the sublist.
To find the index within the original list, add the index used to create the sublist to the result:
System.out.println(bricks.subList(1, bricks.size()).indexOf("CCCC") + 1);
Some refactoring makes this clearer:
public static <T> int indexOfAfter(List<T> list, T item, int from) {
int result = list.subList(from, list.size()).indexOf(item);
return result == -1 ? -1 : (from + result);
}
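Called on the list from the question, the helper returns the index relative to the original list:
ArrayList<String> bricks = new ArrayList<>(List.of("BBBB", "CCCC", "DDDD"));
System.out.println(indexOfAfter(bricks, "CCCC", 1)); // 1
System.out.println(indexOfAfter(bricks, "BBBB", 1)); // -1, because searching starts at index 1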
|
76388533 | 76389099 | I have data loaded into Power BI that looks like this:
ID | TYPE | Product 1 | Product 2 | Product 3
1 | A | 1 | 1 | 0
1 | B | 0 | 0 | 1
2 | A | 0 | 1 | 1
2 | B | 1 | 0 | 1
3 | A | 1 | 0 | 0
So every column besides the "ID" and "TYPE" columns is a binary 0/1 column that indicates whether a certain person acquired a given product.
What I want to do is create a single dropdown slicer with the values Product 1, Product 2 and Product 3 that will filter only the persons who acquired the selected product.
| How do I create a single slicer from a group of columns | If you can't get a table with one product column before you connect to Power BI, and everything else fails, you can do the following:
In the query editor create a new query and refer it to your table above
= YourTableName
So you get the table 3 times. Now for table one delete columns 4 & 5, for table two delete columns 3 & 5 and for table three delete columns 3 & 4. Then rename the columns identically in all three tables: ID, Type, Product.
In the last step, under Home > Append Queries, you can append all three tables. Now you have your final table with just one Product column.
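If you prefer to do the same thing in one step in the Advanced Editor, a rough M sketch of the reference-and-append approach could look like this (YourTableName and the column names are taken from the example above and may need adjusting):
let
    Source = YourTableName,
    P1 = Table.RenameColumns(Table.SelectColumns(Source, {"ID", "TYPE", "Product 1"}), {{"Product 1", "Product"}}),
    P2 = Table.RenameColumns(Table.SelectColumns(Source, {"ID", "TYPE", "Product 2"}), {{"Product 2", "Product"}}),
    P3 = Table.RenameColumns(Table.SelectColumns(Source, {"ID", "TYPE", "Product 3"}), {{"Product 3", "Product"}}),
    Appended = Table.Combine({P1, P2, P3})
in
    Appended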
|
76391852 | 76392026 | (if you are only interested in the problem, then go to "What if in short?")
What kind of stupid question?
I'm doing some work and I have already built all the graphics with x, and I don't want to change the style.
Now I need a histogram, but the way it comes out with ggplot2 does not suit me.
What do I mean?
I took the bin width from hist(), so there will be the same number of bars (which can be seen from the graphs), but in hist(), and as I want, the bars do NOT cross the important/magic number 0.0012, while in geom_histogram they do.
What if in short?
How to "shift" histogram bars with ggplot2 so that they do not cross a certain number (0.0012)?
Or, in short: how to make a histogram with the "data" from hist() but the design of ggplot2?
Here is my code:
# check bin width
standart_hist <- hist(my_vector, plot = F)
bw <- standart_hist$breaks[2] - standart_hist$breaks[1]
# create hist with ggplot and bw from standart hist
gghist <- ggplot(mapping = aes(my_vector)) +
geom_histogram(
binwidth = bw,
color = "black",
fill = "white"
)
and result:
my hist
standard hist
FIX:
from joran --- instead of geom_histogram() use stat_bin() as here:
stat_bin(geom = 'bar',breaks = <breaks vector from hist() output>)
My data:
my_vector <- (0.001201367, 0.001199250, 0.001198337, 0.001199200, 0.001199353, 0.001198439, 0.001202447, 0.001205639, 0.001207056, 0.001209714, 0.001204478, 0.001200064, 0.001199386, 0.001199976, 0.001200569, 0.001204738, 0.001208508, 0.001201491, 0.001200995, 0.001199861, 0.001200242, 0.001196367, 0.001200365, 0.001201807, 0.001194364, 0.001197196, 0.001192705, 0.001196178, 0.001192991, 0.001189777, 0.001194227, 0.001197158, 0.001204336, 0.001201081, 0.001201100, 0.001204755, 0.001198810, 0.001202090, 0.001194370, 0.001188529, 0.001191450, 0.001193616, 0.001195733, 0.001198886, 0.001201353, 0.001206878, 0.001201262, 0.001194806, 0.001196192, 0.001193215, 0.001195030, 0.001198202, 0.001184351, 0.001191890, 0.001192882, 0.001194621, 0.001203256, 0.001204150, 0.001197425, 0.001198002, 0.001196185, 0.001194915, 0.001198281, 0.001201858, 0.001195349, 0.001196401, 0.001205476, 0.001201740, 0.001197276, 0.001189442, 0.001192760, 0.001196846, 0.001201342, 0.001204854, 0.001202979, 0.001203136, 0.001199926, 0.001197398, 0.001199905, 0.001199252, 0.001198486, 0.001197114, 0.001196829, 0.001200228, 0.001199666, 0.001194918, 0.001204005, 0.001201363, 0.001204183, 0.001205889, 0.001204553, 0.001202369, 0.001203922, 0.001197001, 0.001200020, 0.001202672, 0.001201746, 0.001203532, 0.001198699, 0.001200975, 0.001202635, 0.001203121, 0.001190614, 0.001199029, 0.001200372, 0.001193731, 0.001193428, 0.001200259, 0.001195203, 0.001194854, 0.001193173, 0.001198266, 0.001195362, 0.001195252, 0.001201008, 0.001199291, 0.001196653, 0.001200357, 0.001201623, 0.001207463, 0.001199381, 0.001198047, 0.001196305, 0.001200419, 0.001208689, 0.001197434, 0.001193885, 0.001198708, 0.001204741, 0.001204281, 0.001193663, 0.001200234, 0.001203809, 0.001199003, 0.001195127, 0.001192189, 0.001187610, 0.001191390, 0.001200602, 0.001197817, 0.001202045, 0.001203998, 0.001205508, 0.001201051, 0.001202057, 0.001208911, 0.001203928, 0.001202267, 0.001201434, 0.001202647, 0.001210024, 0.001210509, 0.001207881, 0.001206928, 0.001206128, 0.001203866, 0.001202204, 0.001204511, 0.001202310, 0.001197504, 0.001199019, 0.001200713, 0.001204197, 0.001204649, 0.001207965, 0.001201847, 0.001200585, 0.001203446, 0.001195972, 0.001202405, 0.001197182, 0.001191603, 0.001197663, 0.001202259, 0.001201008, 0.001200354, 0.001198090, 0.001193479, 0.001202457, 0.001201156, 0.001196038, 0.001201092, 0.001205488, 0.001212173, 0.001203497, 0.001208846, 0.001198349, 0.001200047, 0.001200799, 0.001206939, 0.001207142, 0.001201970, 0.001202742, 0.001204795, 0.001198463, 0.001201559, 0.001201344, 0.001206085, 0.001205526, 0.001197508)
| Recreate hist() binning in ggplot2 with geom_histogram() | Using your data, I believe this does what you want:
h <- hist(my_vector)
ggplot(data = data.frame(x = my_vector),aes(x = x)) +
stat_bin(geom = 'bar',breaks = h$breaks)
|
76388593 | 76389100 | There are a lot of topics about how to configure Jackson to ignore additional properties during unmarshalling, but I didn't find any answer on how to ignore them and also collect the unrecognized properties.
Our flow is: We would like to ignore them but collect all of the unrecognized properties in order to be aware of these properties and fix them.
Does somebody know how I can achieve this?
| Jackson ignore unrecognized property and collect the errors | You can override DeserializationProblemHandler.handleUnknownProperty to get the unknown property name:
public class IgnoreUnknownPropertiesHandler extends DeserializationProblemHandler {
@Override
public boolean handleUnknownProperty(DeserializationContext ctxt, JsonParser p, JsonDeserializer<?> deserializer, Object beanOrClass, String propertyName) throws IOException {
// print ignored property
System.out.println("Ignored property: " + propertyName);
return true;
}
}
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
objectMapper.addHandler(new IgnoreUnknownPropertiesHandler());
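Since the goal is to collect the unknown property names rather than only print them, a slightly extended sketch of the same handler (imports omitted as above; the class and getter names below are just examples) can keep them in a list for later inspection:
public class CollectingUnknownPropertiesHandler extends DeserializationProblemHandler {

    private final List<String> unknownProperties = new ArrayList<>();

    @Override
    public boolean handleUnknownProperty(DeserializationContext ctxt, JsonParser p, JsonDeserializer<?> deserializer, Object beanOrClass, String propertyName) throws IOException {
        // remember the unrecognized property so it can be reported later
        unknownProperties.add(propertyName);
        // skip the value so deserialization of the remaining fields continues cleanly
        p.skipChildren();
        return true;
    }

    public List<String> getUnknownProperties() {
        return unknownProperties;
    }
}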
|
76390394 | 76390482 | I use the * command to fill the search register (/) with the current word (under cursor) so I don't have to paste it into the substitute command.
To do a find and replace I can do it quickly like so:
:%s//MyNewValue/g
instead of
:%s/MyOldValue/MyNewValue/g
But sometimes I just want to change one character in the word (like a typo). So after I used * on the word, I do the following:
:%s//<c-r>//g
But I get this:
:%s//\<MyOldValue\>/g
because the / register contains \<MyOldValue\>.
So, here's my question:
How can I get rid of these \< and \>? Or is there a better way to edit all occurrences of a word in vim?
The only way I found is to yank the word and paste it twice in the substitute pattern.
yiw
:%s/<c-r>"/<c-r>"/g
Also, what do \< and \> mean and what are they used for?
| Why does nvim (vim) add \< and \> to the search pattern and what do they mean? | The \< and \> are word boundaries, see :help \< or :help \>.
This is similar to the "match whole word" checkbox in the search dialog of graphical editors.
For example, it will do the following:
Put the cursor on "foo" and press star.
It will match foo in this line.
But not foobar in this one.
If you don't want this, use g* instead of *. It will not add the word boundaries.
If you're using the :substitute command, you may as well use Ctrl+R Ctrl+W (Ctrl+R Ctrl+A for words-with-non-word-characters) to add the word under your cursor to the command line. In Vim, one would write it like this:
:%s/<C-R><C-W>/MyNewValue/
|
76388334 | 76389142 | I am trying to save data to Firebase but the data is stored in a strange way although the value is a string, like this:
Description
"TextEditingController#d9024(TextEditingValue(text: ┤├, selection:
TextSelection.invalid, composing: TextRange(start: -1, end: -1)))"
Price
"TextEditingController#c11e0(TextEditingValue(text: ┤├, selection:
TextSelection.invalid, composing: TextRange(start: -1, end: -1)))"
Title
"TextEditingController#dcebb(TextEditingValue(text: ┤├, selection:
TextSelection.invalid, composing: TextRange(start: -1, end: -1)))"
UID
"gWYqXqsvTdVI1Sfa8MP4SrFTEmB2"
here is my code:
final adTitleController = TextEditingController();
final priceController = TextEditingController();
final adDescription = TextEditingController();
Future<void> createAds() async {
CollectionReference ads = FirebaseFirestore.instance.collection('Ads');
String? adid = ads.id.toString();
String? title = adTitleController.toString();
String? desc = adDescription.toString();
String? price = priceController.toString();
String? userId;
String? adId;
ads.add({
"UID": FirebaseAuth.instance.currentUser?.uid,
"AdId": adId,
"Title": title,
"Description": desc,
"Price": price,
});
}
| Firestore saves data in a strange format using Flutter | In order to get the text from the controller you need to use the text property of TextEditingController:
String? title = adTitleController.text.toString();
String? desc = adDescription.text.toString();
String? price = priceController.text.toString();
|
76391687 | 76392032 | I've got this TypeScript error and I don't fully understand what's going on:
src/helpers.ts:11:14 - error TS2322: Type '<T extends "horizontal" | "vertical" | undefined, U extends AriaRole | undefined>(ariaOrientation: T, role: U) => "horizontal" | "vertical" | NonNullable<T> | "both"' is not assignable to type 'ResolveOrientationFunction'.
Type '"horizontal" | "vertical" | NonNullable<T> | "both"' is not assignable to type 'NonNullable<T> | "both"'.
Type '"horizontal"' is not assignable to type 'NonNullable<T> | "both"'.
Here is my function:
import { type HTMLAttributes } from "react";
type ResolveOrientationFunction = <
T extends HTMLAttributes<HTMLElement>["aria-orientation"],
U extends HTMLAttributes<HTMLElement>["role"]
>(
ariaOrientation: T,
role: U
) => "both" | NonNullable<T>;
export const resolveOrientation: ResolveOrientationFunction = (ariaOrientation, role) => {
if (ariaOrientation === undefined) {
switch (role) {
case "menubar":
case "slider":
case "tablist":
case "toolbar": {
return "horizontal";
}
case "listbox":
case "menu":
case "scrollbar":
case "tree": {
return "vertical";
}
}
}
return ariaOrientation ?? "both";
};
The function is supposed to return "both" | "horizontal" | "vertical".
HTMLAttributes<HTMLElement>["aria-orientation"] is actually "horizontal" | "vertical" | undefined and HTMLAttributes<HTMLElement>["role"] is React.AriaRole | undefined.
I'm actually trying to make this function match the type "both" | NonNullable<HTMLAttributes<HTMLElement>["aria-orientation"]>.
| Struggling to type a TypeScript function | Your ResolveOrientationFunction type definition,
type ResolveOrientationFunction = <
T extends HTMLAttributes<HTMLElement>["aria-orientation"],
U extends HTMLAttributes<HTMLElement>["role"]
>(
ariaOrientation: T,
role: U
) => "both" | NonNullable<T>;
is generic in both T, the type of ariaOrientation, and U, the type of role. It returns a value of type "both" | NonNullable<T>. So if T is undefined because ariaOrientation is undefined, then the function must return "both" | NonNullable<undefined> which is "both". But your implementation doesn't do that. It can instead return "horizontal" or "vertical" depending on role.
So your resolveOrientation function is not a valid ResolveOrientation.
It's not clear that you need the function to be generic at all. Certainly the U type parameter isn't useful as written, since it has no effect on the return type. And you don't really want the T type parameter to be reflected directly in the output type either. It seems like your return type should just be "both" | "vertical" | "horizontal" without reference to T or U. And if you have a generic function where there's no obvious dependency on the type parameters, then you might not want a generic function in the first place.
If you change the generics to specific types like this:
type AriaOrientation = HTMLAttributes<HTMLElement>["aria-orientation"];
type AriaRole = HTMLAttributes<HTMLElement>["role"]
type ResolveOrientationFunction =
(ariaOrientation: AriaOrientation, role: AriaRole) =>
"both" | NonNullable<AriaOrientation>;
Then your function compiles cleanly:
export const resolveOrientation: ResolveOrientationFunction = (ariaOrientation, role) => {
if (ariaOrientation === undefined) {
switch (role) {
case "menubar":
case "slider":
case "tablist":
case "toolbar": {
return "horizontal";
}
case "listbox":
case "menu":
case "scrollbar":
case "tree": {
return "vertical";
}
}
}
return ariaOrientation ?? "both";
};
Playground link to code
|
76390348 | 76390491 | I created a Website for my Wife's Blog in React and am hosting it on Firebase. I also use DB from firebase.
After the site release I found out that Google does not wait until all data is loaded before indexing, but instead tries to index the site while it is still loading.
For now I solved the issue with prerender with the following package (as per simple implementation in our site):
https://github.com/egoist/presite
The issue with it is that it prerenders only the English (default) site and not the other languages, so indexing is mostly done on the English pages (other languages are ignored), and because the site is prerendered Google sometimes loses the index of the site.
I also checked multiple other prerender options, but currently none has the same simplicity of implementation and support for dynamic pages (each recipe has a template page whose content is loaded from the DB and rendered on the client).
Any ideas how to solve this issue? Is it possible to skip prerendering and get Google to wait for the site's data to load?
For Information/check the site itself: https://fromapot.com/
| Is there a way to make Google wait until all data is loaded for my React website hosted on Firebase with a DB from Firebase before indexing? | SEO is a known problem of SPA websites built with front-end frameworks/libraries. The only options are to prerender or to use SSR (Server Side Rendering).
For your case, I would suggest using the second option since you want to have dynamic indexing depending on language.
When using SSR, you typically need a NodeJS server that listens to requests and creates the initial layout based on the requested route on the server. This layout is then sent to the front-end along with some additional information, which allows the rest of the website to function properly.
The easiest and most powerful way to do that is the NextJS framework, but depending on your build system you may have other options. For example, for Vite there is a simple plugin which enables SSR with almost zero effort.
There is no way to configure Google to wait for the dynamic front-end render, because that would require Google to run a JS runtime for every page, which is too heavy for an indexing engine that visits probably millions of websites each day.
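To illustrate the SSR approach, with Next.js (pages router) each recipe page can be rendered on the server before it is sent to the crawler. A rough sketch, where fetchRecipeFromDb is only a placeholder for your actual Firebase query:
// pages/recipes/[id].js
export async function getServerSideProps({ params, locale }) {
  // placeholder: replace with your real Firestore lookup, per language if needed
  const recipe = await fetchRecipeFromDb(params.id, locale);
  return { props: { recipe } };
}

export default function RecipePage({ recipe }) {
  // the crawler receives this markup already filled in, without a loading state
  return (
    <article>
      <h1>{recipe.title}</h1>
      <p>{recipe.description}</p>
    </article>
  );
}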
|
76388160 | 76389158 | I have a function
def test(self):
tech_line = self.env['tech_line']
allocated_technician = self.env['allocated_technician']
users = self.env['res.users']
tech_line = tech_line.search(
[('service_type_id', '=', self.service_type_id.id)])
al_a6 = self.env['tech_line'].filtered(lambda rec: rec.service_type_id.id == self.service_type_id.id)
area = []
area_new = []
for tec in tech_line:
territory = self.env['territory']
territories = territory.search(
[('technicians', 'in', tec.technician_allociated_id.user_id.id)])
territories_lam = self.env['territory'].filtered(
lambda t_lam: t_lam.technicians.id in tec.technician_allociated_id.user_id.id)
for territory in territories:
area.append(territory.id)
for tet in territories_lam:
area_new.append(tet.id)
print('##################33', len(area))
print('%%%%%%%%%%%%%%%%%%%%', len(area_new))
print('$$$$$$$$$$$$$$$$$$$', tech_line)
print('***************8***', al_a6)
When this method is executed the screen keeps loading, and I need to optimize it. Please share your thoughts on how to optimize this code.
I cannot limit the result generated by the search method because we need all of its values, so instead I thought of using filtered instead of search, but when I use filtered it gives an empty recordset. I need help with that.
| search method optimization for searching field area in odoo15 | You can avoid searching in the for loop by using a search on all users:
def test(self):
tech_line = self.env['tech_line']
tech_lines = tech_line.search(
[('service_type_id', '=', self.service_type_id.id)])
# get all users to avoid search in a for loop
users = tech_lines.mapped("technician_allociated_id.user_id")
# search territories
    territories = self.env['territory'].search([('technicians', 'in', users.ids)])
area = territories.ids
|
76391751 | 76392039 | I have two datasets using r:
df_100 = data.frame(siteid=seq(1,5,1), conflu=c(3,2,4,5,6), diflu=c(9,2,30,2,5))
df_full = data.frame(siteid=seq(1,10,2), conflu=c(6,3,5,2,3), diflu=c(5,9,2,30,7))
If the siteid is the same between df_100 and df_full, I want to take the difference between the conflu columns of each data frame and the same with the diflu columns. I also want that output to be put into a new dataframe, where the siteid is retained and the difference between the columns creates a new column. For example:
df_difference=data.frame(siteid=c(1,3,5), diff_con=c(3,1,-3), diff_dif=c(-4,-18,2))
| if match between column ID in two different datasets, then create a new dataset with the difference of other columns r | I don't follow the calculations to get what you have as the sample output, but based on your description:
library(dplyr)
df_100 <- data.frame(siteid= seq(1,5,1),conflu=c(3,2,4,5,6),diflu=c(9,2,30,2,5))
df_full <- data.frame(siteid = seq(1,10,2),conflu=c(6,3,5,2,3),diflu=c(5,9,2,30,7))
df_difference <- df_100 |>
inner_join(df_full, by = "siteid", suffix = c("_100", "_full")) |>
mutate(
diff_con = conflu_full - conflu_100,
diff_dif = diflu_full - diflu_100
) |>
select(siteid, diff_con, diff_dif)
inner_join will match and keep only the rows with same "siteid". Then use mutate to do the calculations and select the columns you want.
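For reference, with the corrected data frames defined above this produces a data frame along the lines of the following (computed as full minus 100; as noted, it does not match the sample output given in the question):
#   siteid diff_con diff_dif
# 1      1        3       -4
# 2      3       -1      -21
# 3      5       -1       -3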
|
76391960 | 76392049 | I have a function here, where my intent is to add a record to the table. The column name is dynamically defined based on the firstCharVar variable.
The dataframe tblname is a blank table. The first character field in that table is called myvar. There are other columns in that table, and they should remain blank.
#update tables if no records
NoData = function(tblname) {
if (nrow(tblname) == 0) {
#get column name of first character field
allColumns = data.frame(
colName = colnames(tblname),
colIndex = 1:ncol(tblname),
colClass = sapply(tblname, class)
)
charVars = allColumns[allColumns$colClass == 'character', ]
firstCharVar = unfactor(charVars$colName[1])
#run insert statement
#this doesn't work
#Error: unexpected '=' in "tblname = tblname %>% add_row(!!firstCharVar ="
#tblname = add_row(tblname, !!firstCharVar = 'No Data Found')
#but this does
tblname = add_row(tblname, myvar = 'No Data Found')
#clean up stuff used in function
#rm(allColumns, charVars, firstCharVar)
}}
temp2 = NoData(temp2)
| R Studio add_row with dynamic field name | As in other dpylr verbs you could assign values to dynamically created LHS names by using the walrus operator := and !!sym(col) or glue syntax "{col}".
Using a minimal reproducible example based on mtcars:
library(dplyr, warn=FALSE)
col <- "cyl"
mtcars |>
head() |>
add_row(cyl = 1) |>
add_row("{col}" := 2) |>
add_row(!!sym(col) := 3)
#> mpg cyl disp hp drat wt qsec vs am gear carb
#> Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4
#> Mazda RX4 Wag 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4
#> Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1
#> Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1
#> Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2
#> Valiant 18.1 6 225 105 2.76 3.460 20.22 1 0 3 1
#> ...7 NA 1 NA NA NA NA NA NA NA NA NA
#> ...8 NA 2 NA NA NA NA NA NA NA NA NA
#> ...9 NA 3 NA NA NA NA NA NA NA NA NA
|
76390325 | 76390505 | I'm using mix-blend-mode to change the background color of an image. The problem I'm facing is that I have grey-ish text which intentionally overlaps the background image when shown on a mobile device. However, this makes the text unreadable. I've tried all variations of mix-blend-mode and believe my only option is to entirely change the color of the text when it's overlapping.
How can I change the color of a text when it's overlapping another element?
Here is the fiddle:
https://jsfiddle.net/zr8men95/
Relevant portion:
.promo__text {
grid-area: text;
z-index: 1;
margin-left: 1rem;
margin-right: 1rem;
color: #6B7F92;
text-shadow: 0 0 0 black;
}
| Change rgb color of text on overlap | I believe your best shot would be to change the text color when you are on an mobile device. I've read your fiddle and you already have some @media zone defined. We can take advantage of that to add some CSS to change the text color whilst in mobile mode.
This block of code:
@media (min-width: 640px) and (max-width: 1024px) {
.promos {
display: grid;
grid-template-columns: 1fr 1fr;
grid-gap: 1rem;
}
}
Will change to this:
@media (min-width: 640px) and (max-width: 1024px) {
.promos {
display: grid;
grid-template-columns: 1fr 1fr;
grid-gap: 1rem;
}
.promo__text {
    color: white; /* You can change the color to whatever fits your need */
}
}
Hope that helps!
|
76391983 | 76392056 | I am using Spring Boot and Spring Data Elastic. I have an accounts index with data like below
public interface AccountsRepository extends ElasticsearchRepository<Accounts, Long> {
List<Accounts> findByLastname(String lastname);
List<Accounts> findByAge(Integer age);
}
I am getting the below error when performing findAll(). How do I fix this issue?
@Service
public class AccountsService {
@Autowired
private AccountsRepository repository;
public List<Accounts> findAllAccounts(){
return (List<Accounts>) repository.findAll();
}
}
Model
@Data
@AllArgsConstructor
@NoArgsConstructor
@Document(indexName = "accounts", createIndex = false)
public class Accounts {
@Id
private Long account_number;
private Long balance;
private String firstname;
private String lastname;
private Integer age;
private String gender;
private String address;
private String employer;
private String email;
private String city;
private String state;
}
Error:
2023-06-02 21:23:23.784 WARN 44796 --- [nio-8080-exec-1] org.elasticsearch.client.RestClient : request [POST http://localhost:9200/accounts/_search?typed_keys=true&max_concurrent_shard_requests=5&search_type=query_then_fetch&batched_reduce_size=512] returned 1 warnings: [299 Elasticsearch-7.15.0-79d65f6e357953a5b3cbcc5e2c7c21073d89aa29 "Elasticsearch built-in security features are not enabled. Without authentication, your cluster could be accessible to anyone. See https://www.elastic.co/guide/en/elasticsearch/reference/7.15/security-minimal-setup.html to enable security."]
2023-06-02 21:23:23.878 WARN 44796 --- [nio-8080-exec-1] org.elasticsearch.client.RestClient : request [POST http://localhost:9200/accounts/_search?typed_keys=true&max_concurrent_shard_requests=5&search_type=query_then_fetch&batched_reduce_size=512] returned 1 warnings: [299 Elasticsearch-7.15.0-79d65f6e357953a5b3cbcc5e2c7c21073d89aa29 "Elasticsearch built-in security features are not enabled. Without authentication, your cluster could be accessible to anyone. See https://www.elastic.co/guide/en/elasticsearch/reference/7.15/security-minimal-setup.html to enable security."]
2023-06-02 21:23:24.170 ERROR 44796 --- [nio-8080-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.core.convert.ConversionFailedException: Failed to convert from type [java.lang.String] to type [java.lang.Long] for value 'KH-9fIgBhfTLJt8QzMP_'; nested exception is java.lang.NumberFormatException: For input string: "KH-9fIgBhfTLJt8QzMP_"] with root cause
java.lang.NumberFormatException: For input string: "KH-9fIgBhfTLJt8QzMP_"
at java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) ~[na:na]
at java.base/java.lang.Long.parseLong(Long.java:692) ~[na:na]
at java.base/java.lang.Long.valueOf(Long.java:1144) ~[na:na]
at org.springframework.util.NumberUtils.parseNumber(NumberUtils.java:214) ~[spring-core-5.3.21.jar:5.3.21]
at org.springframework.core.convert.support.StringToNumberConverterFactory$StringToNumber.convert(StringToNumberConverterFactory.java:64) ~[spring-core-5.3.21.jar:5.3.21]
at org.springframework.core.convert.support.StringToNumberConverterFactory$StringToNumber.convert(StringToNumberConverterFactory.java:50) ~[spring-core-5.3.21.jar:5.3.21]
at org.springframework.core.convert.support.GenericConversionService$ConverterFactoryAdapter.convert(GenericConversionService.java:437) ~[spring-core-5.3.21.jar:5.3.21]
at org.springframework.core.convert.support.ConversionUtils.invokeConverter(ConversionUtils.java:41) ~[spring-core-5.3.21.jar:5.3.21]
at org.springframework.core.convert.support.GenericConversionService.convert(GenericConversionService.java:192) ~[spring-core-5.3.21.jar:5.3.21]
at org.springframework.core.convert.support.GenericConversionService.convert(GenericConversionService.java:175) ~[spring-core-5.3.21.jar:5.3.21]
at org.springframework.data.elasticsearch.core.convert.MappingElasticsearchConverter$Reader.getPotentiallyConvertedSimpleRead(MappingElasticsearchConverter.java:562) ~[spring-data-elasticsearch-4.4.1.jar:4.4.1]
at org.springframework.data.elasticsearch.core.convert.MappingElasticsearchConverter$Reader.readValue(MappingElasticsearchConverter.java:460) ~[spring-data-elasticsearch-4.4.1.jar:4.4.1]
at org.springframework.data.elasticsearch.core.convert.MappingElasticsearchConverter$Reader.readValue(MappingElasticsearchConverter.java:442) ~[spring-data-elasticsearch-4.4.1.jar:4.4.1]
at org.springframework.data.elasticsearch.core.convert.MappingElasticsearchConverter$Reader$ElasticsearchPropertyValueProvider.getPropertyValue(MappingElasticsearchConverter.java:621) ~[spring-data-elasticsearch-4.4.1.jar:4.4.1]
at org.springframework.data.elasticsearch.core.convert.MappingElasticsearchConverter$Reader.readProperties(MappingElasticsearchConverter.java:404) ~[spring-data-elasticsearch-4.4.1.jar:4.4.1]
at org.springframework.data.elasticsearch.core.convert.MappingElasticsearchConverter$Reader.readEntity(MappingElasticsearchConverter.java:327) ~[spring-data-elasticsearch-4.4.1.jar:4.4.1]
at org.springframework.data.elasticsearch.core.convert.MappingElasticsearchConverter$Reader.read(MappingElasticsearchConverter.java:258) ~[spring-data-elasticsearch-4.4.1.jar:4.4.1]
at org.springframework.data.elasticsearch.core.convert.MappingElasticsearchConverter$Reader.read(MappingElasticsearchConverter.java:217) ~[spring-data-elasticsearch-4.4.1.jar:4.4.1]
at org.springframework.data.elasticsearch.core.convert.MappingElasticsearchConverter.read(MappingElasticsearchConverter.java:161) ~[spring-data-elasticsearch-4.4.1.jar:4.4.1]
at org.springframework.data.elasticsearch.core.convert.MappingElasticsearchConverter.read(MappingElasticsearchConverter.java:83) ~[spring-data-elasticsearch-4.4.1.jar:4.4.1]
at org.springframework.data.elasticsearch.core.AbstractElasticsearchTemplate$ReadDocumentCallback.doWith(AbstractElasticsearchTemplate.java:745) ~[spring-data-elasticsearch-4.4.1.jar:4.4.1]
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) ~[na:na]
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) ~[na:na]
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[na:na]
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) ~[na:na]
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) ~[na:na]
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[na:na]
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) ~[na:na]
at org.springframework.data.elasticsearch.core.AbstractElasticsearchTemplate$ReadSearchDocumentResponseCallback.doWith(AbstractElasticsearchTemplate.java:778) ~[spring-data-elasticsearch-4.4.1.jar:4.4.1]
at org.springframework.data.elasticsearch.core.AbstractElasticsearchTemplate$ReadSearchDocumentResponseCallback.doWith(AbstractElasticsearchTemplate.java:763) ~[spring-data-elasticsearch-4.4.1.jar:4.4.1]
at org.springframework.data.elasticsearch.core.ElasticsearchRestTemplate.search(ElasticsearchRestTemplate.java:404) ~[spring-data-elasticsearch-4.4.1.jar:4.4.1]
at org.springframework.data.elasticsearch.repository.support.SimpleElasticsearchRepository.lambda$findAll$1(SimpleElasticsearchRepository.java:123) ~[spring-data-elasticsearch-4.4.1.jar:4.4.1]
at org.springframework.data.elasticsearch.repository.support.SimpleElasticsearchRepository.execute(SimpleElasticsearchRepository.java:355) ~[spring-data-elasticsearch-4.4.1.jar:4.4.1]
at org.springframework.data.elasticsearch.repository.support.SimpleElasticsearchRepository.findAll(SimpleElasticsearchRepository.java:123) ~[spring-data-elasticsearch-4.4.1.jar:4.4.1]
at org.springframework.data.elasticsearch.repository.support.SimpleElasticsearchRepository.findAll(SimpleElasticsearchRepository.java:112) ~[spring-data-elasticsearch-4.4.1.jar:4.4.1]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]
at org.springframework.data.repository.core.support.RepositoryMethodInvoker$RepositoryFragmentMethodInvoker.lambda$new$0(RepositoryMethodInvoker.java:289) ~[spring-data-commons-2.7.1.jar:2.7.1]
at org.springframework.data.repository.core.support.RepositoryMethodInvoker.doInvoke(RepositoryMethodInvoker.java:137) ~[spring-data-commons-2.7.1.jar:2.7.1]
at org.springframework.data.repository.core.support.RepositoryMethodInvoker.invoke(RepositoryMethodInvoker.java:121) ~[spring-data-commons-2.7.1.jar:2.7.1]
at org.springframework.data.repository.core.support.RepositoryComposition$RepositoryFragments.invoke(RepositoryComposition.java:530) ~[spring-data-commons-2.7.1.jar:2.7.1]
at org.springframework.data.repository.core.support.RepositoryComposition.invoke(RepositoryComposition.java:286) ~[spring-data-commons-2.7.1.jar:2.7.1]
at org.springframework.data.repository.core.support.RepositoryFactorySupport$ImplementationMethodExecutionInterceptor.invoke(RepositoryFactorySupport.java:640) ~[spring-data-commons-2.7.1.jar:2.7.1]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.3.21.jar:5.3.21]
at org.springframework.data.repository.core.support.QueryExecutorMethodInterceptor.doInvoke(QueryExecutorMethodInterceptor.java:164) ~[spring-data-commons-2.7.1.jar:2.7.1]
at org.springframework.data.repository.core.support.QueryExecutorMethodInterceptor.invoke(QueryExecutorMethodInterceptor.java:139) ~[spring-data-commons-2.7.1.jar:2.7.1]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.3.21.jar:5.3.21]
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) ~[spring-aop-5.3.21.jar:5.3.21]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.3.21.jar:5.3.21]
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:215) ~[spring-aop-5.3.21.jar:5.3.21]
at com.sun.proxy.$Proxy59.findAll(Unknown Source) ~[na:na]
at com.example.service.AccountsService.findAllAccounts(AccountsService.java:16) ~[classes/:na]
at com.example.controller.AccountsController.findAllAccounts(AccountsController.java:19) ~[classes/:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205) ~[spring-web-5.3.21.jar:5.3.21]
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150) ~[spring-web-5.3.21.jar:5.3.21]
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117) ~[spring-webmvc-5.3.21.jar:5.3.21]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895) ~[spring-webmvc-5.3.21.jar:5.3.21]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808) ~[spring-webmvc-5.3.21.jar:5.3.21]
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-5.3.21.jar:5.3.21]
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1067) ~[spring-webmvc-5.3.21.jar:5.3.21]
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:963) ~[spring-webmvc-5.3.21.jar:5.3.21]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006) ~[spring-webmvc-5.3.21.jar:5.3.21]
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898) ~[spring-webmvc-5.3.21.jar:5.3.21]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:655) ~[tomcat-embed-core-9.0.64.jar:4.0.FR]
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) ~[spring-webmvc-5.3.21.jar:5.3.21]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764) ~[tomcat-embed-core-9.0.64.jar:4.0.FR]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) ~[tomcat-embed-websocket-9.0.64.jar:9.0.64]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) ~[spring-web-5.3.21.jar:5.3.21]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117) ~[spring-web-5.3.21.jar:5.3.21]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) ~[spring-web-5.3.21.jar:5.3.21]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117) ~[spring-web-5.3.21.jar:5.3.21]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) ~[spring-web-5.3.21.jar:5.3.21]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117) ~[spring-web-5.3.21.jar:5.3.21]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:360) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:399) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:890) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1787) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
| How to only get _source fields of elasticsearch using Spring Data Elastic? | You marked the property account_number of your entity, which is a Long, as the @Id. But the id in your index is a String (here with the value "KH-9fIgBhfTLJt8QzMP_").
Add a proper id property:
@Id
private String id;
Edit: 03.06.2023:
As for the error
java.lang.ClassCastException: class org.springframework.data.domain.PageImpl cannot be cast to class java.util.List (org.springframework.data.domain.PageImpl
findAll() is defined to return an Iterable, which is an interface. The actual returned object is of type PageImpl, which implements Page, which extends Iterable.
You cannot cast a Page implementation to a List<Accounts>.
|
76388624 | 76389229 | I am relatively new to SQL and I have a short question. I already searched for similar questions on Stack Overflow but I couldn't find anything. I have created some views. These views change from one version to another. To make migration easier for the customer, I want to filter differences in the data_types of columns between two versions. Currently I'm working with PostgreSQL version 11.16.
I have table that looks like this:
| versionsnummer | install_timestamp | column_name | data_type |
| --- | --- | --- | --- |
| D1 | 2023-06-02 06:42:14.531588 | t0801_01 | integer |
| D1 | 2023-06-02 06:42:14.531588 | t0801_04 | character varying |
| D2 | 2023-07-02 06:42:14.531588 | t0801_01 | integer |
| D2 | 2023-07-02 06:42:14.531588 | t0801_04 | integer |
Now I want to find all rows where the value of the column data_type has changed between two versions.
So I'm expecting the following result:
| versionsnummer | install_timestamp | column_name | data_type |
| --- | --- | --- | --- |
| D1 | 2023-06-02 06:42:14.531588 | t0801_04 | character varying |
| D2 | 2023-06-02 06:42:14.531588 | t0801_04 | integer |
What I've tried is this:
SELECT DISTINCT ON (column_name, data_type) column_name, data_type FROM mytable WHERE versionsnummer = 'D1' OR versionsnummer = 'D2';
Unfortunately I didn't get the expected result with this query. Could you please tell me what I'm doing wrong here?
Thank you very much :)
| How can I filter differences of datasets in a table | I think you can achieve this via a "SELF JOIN": join the table with itself on the "column_name" column.
Here is the code:
SELECT t1.versionsnummer, t1.install_timestamp, t1.column_name, t1.data_type
FROM [your_table_name] t1
JOIN [your_table_name] t2 ON t1.column_name = t2.column_name
WHERE t1.data_type <> t2.data_type;
Example: If you select all columns from the table, you will get:
| versionsnummer | install_timestamp | column_name | data_type |
| --- | --- | --- | --- |
| D1 | 2023-06-02 06:42:14.531 | t0801_04 | character_varing |
| D2 | 2023-07-02 06:42:14.531 | t0801_04 | integer |
Tested here:
*Don't forget to change table name in this query!
|
76390367 | 76390507 | Have coded an emacs minor mode with the definition shown below.
Would it be possible to simplify this, to perhaps call a function instead? Or is it not such a big deal having a minor-mode defined in this way ?
What would be the general way to set a minor-mode in terms of functionality (e.g. enable, disable, ...) ?
(defcustom komis-luna-signal t
"todo"
:group 'komis-luna)
;;;###autoload
(define-minor-mode komis-luna-minor-mode
"Uses large geometric shapes for displaying heading levels."
nil nil nil
(let*
(($keyword
`(("^\\*+ "
(0 (let* ( ($kondor
(- (match-end 0) (match-beginning 0) 1))
($inline-task
(and (boundp 'org-inlinetask-min-level)
(>= $kondor org-inlinetask-min-level))) )
;;--------------------------------------
(compose-region (- (match-end 0) 2)
(- (match-end 0) 1)
(komis-luna-shape-select $kondor))
;;---------------------------------------
(when $inline-task
(compose-region (- (match-end 0) 3)
(- (match-end 0) 2)
(komis-luna-shape-select $kondor)))
;;---------------------------------------
(when (facep komis-luna-typeface)
(put-text-property
(- (match-end 0) (if $inline-task 3 2))
(- (match-end 0) 1)
'face komis-luna-typeface))
;;---------------------------------------
(put-text-property
(match-beginning 0)
(- (match-end 0) 2)
'face (list :foreground
(face-attribute 'default :background)))
;;---------------------------------------
(put-text-property (match-beginning 0)
(match-end 0)
'keymap komis-luna-mouse-sweep)
;;---------------------------------------
nil )) ))))
(if komis-luna-signal
(progn
(font-lock-add-keywords nil $keyword)
(font-lock-fontify-buffer))
(save-excursion
(goto-char (point-min))
(font-lock-remove-keywords nil $keyword)
(while (re-search-forward "^\\*+ " nil t)
(decompose-region (match-beginning 0) (match-end 0)) )
(font-lock-fontify-buffer)) ) ))
| Defining an emacs minor mode | I think you did it the proper way.
Next would be to toggle your minor mode in your emacs configuration file .emacs.d/init.el with a hook probably like this :
(add-hook 'org-mode-hook 'komis-luna-minor-mode)
That is, if you want your minor mode to be enabled when entering org-mode.
For further and more specific questions and problems, you should probably take this post to Emacs Stack Exchange.
|
76391607 | 76392057 | I'm trying to pass an array using extravars to the Ansible URI module.
% ansible-playbook someplaybook.yml -e '{ "letters": [ "aa", "ab", "ac", "ad", "ae" ] }'
When I run it through the URI module, I'm getting [ "['aa', 'ab', 'ac', 'ad', 'ae']" ] in the response. How do I send the entire array without the brackets and quotes. I've tried "{{ letters | replace("[", "") | replace("]", "") }}" but that doesn't get rid of the exterior quotes.
- name: Create a letter job
uri:
url: 'http://localhost:8000/api/app'
method: POST
headers:
Content-Type: application/json
body: '{ "letters": [ "{{ letters }}" ] }'
body_format: json
return_content: yes
register: response
Current Output:
"response": {
"id": 2796831356421,
"letters": [
"['aa', 'ab', 'ac', 'ad', 'ae']"
],
"lastError": null
}
Desired Output:
"response": {
"id": 2796831356421,
"letters": [
'aa', 'ab', 'ac', 'ad', 'ae'
],
"lastError": null
}
| How to pass array to Ansible URI module | Your letters variable is a list, but when you write:
"{{ letters }}"
You are asking -- explicitly -- to render it as a simple string. You don't want that; you want to maintain the structure of the data. You can do that like this:
- name: Create a letter job
uri:
url: 'http://localhost:8000/api/app'
method: POST
headers:
Content-Type: application/json
body: '{{ { "letters": letters } }}'
body_format: json
return_content: yes
register: response
Here, we're using a Jinja template ({{ ... }}) to create dictionary with a single key, letters, whose value is the content of your letters variable.
Using this sample application:
from pydantic import BaseModel
from fastapi import FastAPI
class Letters(BaseModel):
letters: list[str]
app = FastAPI()
@app.post("/api/app")
def echo(letters: Letters) -> Letters:
return letters
And the following playbook:
- hosts: localhost
gather_facts: false
tasks:
- name: Create a letter job
uri:
url: 'http://localhost:8000/api/app'
method: POST
headers:
Content-Type: application/json
body: '{{ { "letters": letters } }}'
body_format: json
return_content: yes
register: response
- debug:
var: response
We see as output:
TASK [debug] ********************************************************************************************
ok: [localhost] => {
"response": {
"changed": false,
"connection": "close",
"content": "{\"letters\":[\"aa\",\"ab\",\"ac\",\"ad\",\"ae\"]}",
"content_length": "38",
"content_type": "application/json",
"cookies": {},
"cookies_string": "",
"date": "Fri, 02 Jun 2023 16:42:50 GMT",
"elapsed": 0,
"failed": false,
"json": {
"letters": [
"aa",
"ab",
"ac",
"ad",
"ae"
]
},
"msg": "OK (38 bytes)",
"redirected": false,
"server": "uvicorn",
"status": 200,
"url": "http://localhost:8000/api/app"
}
}
...which I think is exactly what you were after.
|
76388538 | 76389242 | I am using Java 17 and the iText pdf library (5.5.4), I'm currently attempting to write a paragraph on an existing pdf file inside a rectangular area, however I seem to have a NullPointerExeption when invoking the go() method, I'm not sure exactly why. I have included my code, any help would be appreciated.
public class Main {
public static void main(String[] args) {
try {
PdfReader reader = new PdfReader("src/main/resources/test_file.pdf");
PdfStamper stamper = new PdfStamper(reader, new FileOutputStream("src/main/resources/output.pdf"));
PdfContentByte cb = stamper.getOverContent(1);
ColumnText ct = new ColumnText(cb);
ct.setSimpleColumn(new Rectangle(36, 600, 200, 800));
ct.addElement(new Paragraph("I want to add this text in a rectangle defined by the coordinates llx = 36, lly = 600, urx = 200, ury = 800"));
int status = ct.go();
} catch (DocumentException | IOException e) {
throw new RuntimeException(e);
}
}
}
Exception in thread "main" java.lang.NullPointerException: Cannot invoke "com.itextpdf.text.pdf.PdfStructureElement.getAttribute(com.itextpdf.text.pdf.PdfName)" because "this.parent" is null
| Cannot write a paragraph to a pdf file using iText pdf | I could reproduce your issue using your example file with iText 5.5.4. Then I tried again with the current 5.5.13.3. There was no issue. Thus, please update.
Comparing the 5.5.4 code (where the exception occurs) with the corrsponding 5.5.13.3 code, one sees that there indeed was an unconditional call of a method of a parent object but that now there is a call to a helper method that first checks the parent and only calls its method if it isn't null.
This fix has been applied in early 2015, the commit comment was "Fixed NPE when modifying content of TaggedPDF document.".
|
76380856 | 76389538 | I wrote this code to search a specific folder's text files for word matches and to specify them:
import re, os, sys
from pathlib import Path
#Usage: regs directory
try:
if len(sys.argv) == 2:
folder = sys.argv[1]
fList = os.listdir(folder)
uInput = input('input a regex: ')
regObj = re.compile(f'''{uInput}''')
wordReg = re.compile(r'''([A-Za-z0-9]+|\s+|[^\w\s]+)''')
matches = []
print(fList)
for file in fList:
if not os.path.isdir(Path(folder)/Path(file)):
currentFileObj = open(f'{folder}/{file}')
content = currentFileObj.readlines()
currentFileObj.seek(0)
text = currentFileObj.read()
words = wordReg.findall(text)
matches = list(filter(regObj.match, words))
instances = 0
print(f"matches in ({file}):\n'", end='')
for word in words:
if word in matches:
print("\u0333".join(f"{word} "), end='')
else:
print(word, end='')
print("'")
for line in content:
matches = regObj.findall(line)
for match in matches:
print("\u0333".join(f"{match} "), end=' ')
print(f"in line number {content.index(line)+1}")
if match != '':
instances = instances + 1
print(f'number of instances found: {instances}\n')
else:
continue
else:
print('Usage: regs directory')
except FileNotFoundError:
print("that file doesn't exist.")
except PermissionError:
print("you don't have permission to search that folder.")
It works for the most part, except for a few regular expressions: if the regular expression has punctuation or a whitespace character next to other characters, it won't underline it. It may work if I find a way to substitute matches with a modified version of the match (replacing the match with an underlined version).
Does anyone know a fix?
Here's what it looks like for any other regex.
You can see in the first text file it doesn't underline the match (out.)
I tried looking for functions that would substitute matches with a modification of said match, but it doesn't appear that there are any.
There are also the minor problems of it not being able to underline whitespace and punctuation properly, and the underline character doesn't appear in the Windows 7 command prompt; maybe a different character other than the underline could work?
| How can I replace matches in a Python regex with a modified version of the match? | I've figured out the answer:
Using a lambda function as the repl argument of re.sub, I was able to modify the matches and then use them for the substitution.
import re, os, sys
from pathlib import Path
#Usage:regs directory
try:
if len(sys.argv) == 2:
folder = sys.argv[1]
fList = os.listdir(folder)
print("folder contents: ", end=' ')
for f in fList:
if not f == fList[-1]:
print(f, end=', ')
else:
print(f, end='.\n\n')
uInput = input('input a regex: ')
print()
regObj = re.compile(f'''{uInput}''')
wordReg = re.compile(r'''([A-Za-z0-9]+|\s+|[^\w\s]+)''')
matches = []
for file in fList:
if os.path.isfile(Path(folder)/Path(file)):
currentFileObj = open(f'{folder}/{file}')
lines = currentFileObj.readlines()
currentFileObj.seek(0)
text = currentFileObj.read()
words = wordReg.findall(text)
matches = list(filter(regObj.match, words))
instances = 0
print(f"matches in ({file}):\n'", end='')
print(regObj.sub(lambda match: "(" + match.group() + ")", text)+"'")
for line in lines:
matches = regObj.findall(line)
for match in matches:
print((f"({match})"), end=' ')
print(f"in line number {lines.index(line)+1}")
if match != '':
instances = instances + 1
print(f'number of instances found: {instances}\n')
else:
continue
else:
print('Usage:regs directory')
except FileNotFoundError:
print("that file doesn't exist.")
except PermissionError:
print("you don't have permission to search that folder.")
Instead of having a loop that goes over the list of the string's words, it just prints the match group between parentheses like so:
print(regObj.sub(lambda match: "(" + match.group() + ")", text)+"'")
The output now looks like this.
It also prints the folder contents now.
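For reference, the same idea in isolation: re.sub accepts a callable as repl, and whatever the callable returns replaces the match (a tiny sketch with an arbitrary pattern and input string):
import re
text = "wrap every word on this line"
# The lambda receives each re.Match object; its return value replaces the match.
marked = re.sub(r"[A-Za-z0-9]+", lambda m: f"({m.group()})", text)
print(marked)  # (wrap) (every) (word) (on) (this) (line)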
|
76389533 | 76390559 | Does a Kafka cluster have some unique ID that I can get programmatically out of a Kafka consumer? I checked for message headers, but it looks like there's no metadata in them by default and I don't see any methods in a KafkaConsumer or ConsumerRecord for retrieving such a value either.
| Kafka consumer unique cluster ID? | You can use AdminClient to get the cluster id, but generally, there's no reason for clients to know internal server information
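For example, with the confluent-kafka Python client (one client that exposes this; a minimal sketch, and the broker address is a placeholder):
from confluent_kafka.admin import AdminClient
# list_topics() returns a ClusterMetadata object, which carries the cluster id.
admin = AdminClient({"bootstrap.servers": "localhost:9092"})
metadata = admin.list_topics(timeout=10)
print(metadata.cluster_id)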
|
76390462 | 76390585 | I have a query for a deployment table. There is no data in the hotfix column yet. I want to show the change count both without and with hotfix for each time interval.
Table data:
| deployTime | changeno | hotfix |
| --- | --- | --- |
| 2022-08 | aaa | |
| 2022-08 | bbb | |
| 2022-11 | ccc | |
| 2023-01 | ddd | |
First attempted query:
SELECT deployTime AS times ,
COUNT(DISTINCT changeno) AS "Change Count"
FROM deployments
WHERE hotfix = ''
GROUP BY deployTime
which returns all dates with Change count:
| times | ChangeCount |
| --- | --- |
| 2022-08 | 2 |
| 2022-11 | 1 |
| 2023-01 | 1 |
Second attempted query:
SELECT deployTime AS times ,
COUNT(DISTINCT changeno) AS "Change Count"
FROM deployments
WHERE hotfix != ''
GROUP BY deployTime
which returns no records if there's no record with hotfix != ''.
How can we get a 0 count for every date instead of nothing?
| times | HotfixCount |
| --- | --- |
| 2022-08 | 0 |
| 2022-11 | 0 |
| 2023-01 | 0 |
Thanks
| How to return 0 for all time intervals instead of nothing when counting | The problem with your query is that you're using a WHERE clause, which removes records. What you should do instead is apply conditional aggregation based on the presence/absence of hotfix values:
SELECT deployTime AS times ,
COUNT(DISTINCT changeno) FILTER(WHERE hotfix = '') AS "Change Count"
FROM deployments
GROUP BY deployTime
And change it to WHERE NOT hotfix = '' conversely to obtain zeroes.
Check the demo here.
Note: It's better to have NULL values instead of empty strings, when you need to indicate missing data.
|
76392054 | 76392081 | I have a table where each instance in the table is a sold ticket and tickets can have different ticket types. Looking something like this:
| Event | Ticket Type |
| --- | --- |
| Event 1 | a |
| Event 2 | a |
| Event 1 | b |
| Event 2 | a |
| Event 1 | a |
I want it to be grouped by the event but displaying both the total tickets for that event as well as breaking down the number of each ticket type.
| Event | Total Tickets | Ticket Type a | Ticket Type b |
| --- | --- | --- | --- |
| Event 1 | 3 | 2 | 1 |
| Event 2 | 2 | 2 | 0 |
I have tried a few different queries but nothing that is showing me the results I'm looking for. Is this possible in one query?
| How do I break down data that I am grouping by? | Yes, it is possible by using GROUP BY to group the event data and then calculating the totals. The following query works by counting the number of tickets, then counting the number of tickets for each type "a" and "b".
SELECT
Event,
COUNT(*) AS 'Total Tickets',
COUNT(CASE WHEN TicketType = 'a' THEN 1 END) AS 'Ticket Type a',
COUNT(CASE WHEN TicketType = 'b' THEN 1 END) AS 'Ticket Type b'
FROM
Tickets
GROUP BY
Event;
|
76389631 | 76390588 | I'm trying to create a Client library that will allow users to push serialized data to a service.
so I created a class
public class Data
{
public string Prop1 {get; set;}
public SubData Prop2 {get; set;}
}
public abstract class SubData
{
public string Prop3 {get; set;}
}
I would like to allow users to extend that SubData to add custom properties, but I'm having issues with my serialization: it doesn't serialize the properties of the extended inner object.
JsonSerializer.Serialize(data)
I know that I could decorate my SubData class with JsonDerivedType attribute, but my problem is that SubData is in a package, and it doesn't know about who will extend it and with what (and it doesn't care)
I don't know if I'm clearly describing what my problem is, so here is a full test to replicate it:
using System.Text.Json;
public class Data
{
public string Prop1 { get; set; }
public SubData SubData { get; set; }
}
public abstract class SubData
{
public string Prop2 { get; set; }
}
public class ExtendedSubData : SubData
{
public string Prop3 { get; set; }
}
var data = new Data
{
Prop1 = "1",
SubData = new ExtendedSubData
{
Prop2 = "2",
Prop3 = "3"
}
};
SerializData(data);
void SerializData(Data data)
{
var serialized = JsonSerializer.Serialize<object>(data);
// serialized at this point doesn't contain Prop3 of
// ExtendedSubData
}
| Polymorphic serialization of property | As an alternative to the custom JsonConverter, you can replace the SubData property with a generic type parameter:
public class Data<TSubData>
where TSubData : SubData
{
public string Prop1 { get; set; }
public TSubData SubData { get; set; }
}
string SerializData<TSubData>(Data<TSubData> data)
where TSubData : SubData
{
return JsonSerializer.Serialize<object>(data);
}
Test it on dotnetfiddle:
https://dotnetfiddle.net/lkOM0Z
|
76388586 | 76389618 | I have a poor understanding of this topic. A major step in distributing any application is code signing, which signs the application together with its dependent dynamic libraries. As I understand it, the OS will check the signed application during installation and on subsequent launches. If the application or a dynamic library was changed, then the OS rejects the launch. Many dylib files are supplied to conform to the LGPL license, and therefore those dylib files could potentially be substituted by the user later. But that will break the launch of the application (because a signed dylib was replaced). Are my assumptions correct? Maybe there is a comprehensive book/guide which covers this topic? I found the Apple documentation pretty bad.
| How to sign dylib file which can be replaced? | That's not how this works.
The signature of the library isn't going to matter if the entire library will be replaced. What matters is the signature of the main executable of the process, specifically whether it enforces library validation. But even if you disable library validation, that is likely only going to work for dlopen() scenarios.
The problem is that app bundles are signed as a whole. Even non-executable resource files within the bundle are hashed, and then this list of hashes is hashed again and stored in the code signature of the main binary. While it looks syntactically possible to exclude files from this, I don't know whether Gatekeeper would accept this, and I don't know whether it would work for dylibs in particular.
But even if you found a combination that works, it will likely only be a matter of time before Apple breaks it, because the whole point of codesigning is that only pre-approved binaries are allowed to be executed.
The simple solution is: if you replace the library, you re-sign the entire bundle.
|
76392070 | 76392085 | So to make it very simple.
If I write print("Hello World") and run it with the Run icon in the top right corner, it works perfectly fine.
And only then, if I type python hello.py in the terminal, does it execute again just fine.
But if I change the program to print("Hello to everybody") and type python hello.py in the terminal, it executes the previous program, giving me Hello World on screen. But then again, if I click the Run icon, and after that correct execution repeat python hello.py in the terminal, it now runs correctly.
Well, I tried Ctrl+Shift+P to change the terminal and select the interpreter, but I couldn't figure out from which file it is trying to start the program, and why it is correct after I use the Run icon.
| Cannot run updated file from terminal of VSC, for Python | Are you saving the file after changing it? The Run button will run the code that is currently in the editor, while running it via python in the command line will run it from the saved file.
|
76391253 | 76392089 | I have a plugin written in Vue 2 that programmatically mounts a component onto existing app instance.
export default {
install (Vue) {
const component = new (Vue.extend(Component))()
const vm = component.$mount()
document.body.appendChild(vm.$el)
}
}
Trying to migrate this to Vue 3 and this is the closest I've got.
export default {
install (app) {
const component = defineComponent(Component)
const container = document.createElement('div')
document.body.appendChild(container)
const vm = createApp(component).mount(container)
}
}
The problem is that createApp creates a new app instance, while Vue.extend created only a subclass. I need to have all globally available components in the main app, available in plugin's component as well. Creating a new app instance prevents this.
The component needs to be programmatically mounted. Manually inserting it into the template is not an option.
Please help.
| Vue 3: Mount component onto existing app instance | Interesting problem! There is a discussion on the Vue GitHub and they created a module that you can use: mount-vue-component. I found it very helpful to look at the module's code, it does exactly what you want to do.
To instantiate the component, you have to create a VNode, which you can do with createVNode() (which is the same as h()). To get access to the app's context, including global components, you have to set the appContext property. Finally, the render() function mounts the component instance to an HTML Element.
So for your example, that gives you:
export default {
install (app) {
const container = document.createElement('div')
document.body.appendChild(container)
const vNode = createVNode(component)
vNode.appContext = app._context
render(vNode, container)
}
}
Here it is in a snippet:
const { createApp, createVNode, ref, render } = Vue;
const App = {
data() {
return {
msg: ref('Hello from app!')
}
}
}
const app = createApp(App)
app.component('inner-component', {
template: '<div style="color: green;">This is a global component in app</div>'
})
app.mount('#app')
const component = {
props: ['color'],
template: '<span :style="{color: color}">Hello from mounted component!</span><inner-component></inner-component>'
}
const el = document.getElementById('leComponent')
const vNode = createVNode(component, {color: 'blue'}, [])
vNode.appContext = app._context
render(vNode, el)
div{
margin: 4px;
padding: 4px;
border: 1px solid #333;
}
<div id="app">
{{msg}}
<inner-component></inner-component>
</div>
<div id="leComponent"></div>
<script src="https://unpkg.com/vue@3/dist/vue.global.js"></script>
|
76390122 | 76390602 | spsolve is then - sometimes - unable to find a solution.
Our teacher gave us test cases that we have to satisfy however I passed all of them but seems to fail the hidden test cases.
My code checks for the following: if they share a node and only those two resistors are connected, then print SERIES, else NEITHER. If their previous resistor is in series, then it is in SERIES (see test case no 2). If the resistors' ends are connected to the same nodes, then print PARALLEL.
Can you suggest some inputs or any possible scenarios that a code wouldn't be able to answer correctly? or maybe a suggestion on what type of algorithm I should use for this problem
As I am performing nodal analysis, a singular matrix is expected since the position of the ground potential is generally not well-defined. However, before the update, a solution was found in 99% of the cases, maybe more. Now, I'm at 10% for large systems at best. I have not changed the algorithm and for a few tests, I have used identical code as before. Here is how I set up my calculation:
I generate a random three-dimensional network of resistors (I realize that I could accidentally create unsolvable networks but the percentages above should not change that drastically). The only SciPy/NumPy functions used here is np.random
I create a sparse lil-matrix which I fill with conductance values extracted from my resistor network. I also create a solution vector which is not sparse.
I convert the conductance matrix to csr-format and use the spsolve method. This is where my code lately fails.
| Matrix Circuit Analysis using Algorithms | Electrons travel between Vdd and ground. The Dijkstra algorithm (google it) finds these routes. If two resistors are on the same route, they are in series.
Setup
LOOP over every pair of resistors
IF the ends of the two resistors are connected to the same nodes
- mark as parallel
Create adjacency list for resistors, combining parallels together into one edge.
To check if R1 and R2 are in series:
IF R1 and R2 are marked parallel return PARALLEL
Use Dijkstra to find all routes from Vdd to R1
IF no route return NO
Use Dijkstra to find all routes from R1 to GND
IF no route return NO
IF R2 itself, or as part of a parallel, is on any of the routes found, return SERIES
Note that this will return YES for R3 and R4 in sample input 1. I do not understand why R3 and R4 are not in series - electricity will flow through R3 and then R4 to reach ground.
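A rough Python sketch of the setup steps above, assuming resistors arrive as (name, node_a, node_b) tuples (here using the four-resistor example given below); plain BFS stands in for the route search, since only the existence of a route matters:
from collections import defaultdict, deque
resistors = [("R1", "Vdd", "a"), ("R2", "a", "b"), ("R3", "a", "b"), ("R4", "b", "GND")]
# Mark parallels: two resistors whose ends connect to the same pair of nodes.
parallel = defaultdict(set)
for n1, a1, b1 in resistors:
    for n2, a2, b2 in resistors:
        if n1 != n2 and {a1, b1} == {a2, b2}:
            parallel[n1].add(n2)
# Node adjacency list; each edge remembers which resistor it belongs to.
graph = defaultdict(list)
for name, a, b in resistors:
    graph[a].append((b, name))
    graph[b].append((a, name))
def route_exists(start, goal):
    # Breadth-first search: is there any path of resistors from start to goal?
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt, _ in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False
The series test then follows the outline: check the parallel marks first, confirm a route exists from Vdd to R1 and from R1 on to GND, and see whether R2 (or one of its parallels) lies on such a route.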
Obviously, I do not understand your definition of "in series". Please provide this definition. In particular:
R1 Vdd a
R2 a b
R3 a b
R4 b GND
Are R1 and R4 in series or not?
|
76390958 | 76392099 | Does a StatefulSet pod, when deleted or failed, get redeployed on the same worker node, or is it deployed on another available worker node?
Seeking clarification on the default behavior of StatefulSet pods in Kubernetes when they are deleted or failed whether they are rescheduled on the same worker node or on different worker nodes.
| Does a StatefulSet pod, when deleted or failed, get redeployed on the same worker node, or is it deployed on another available worker node? | By default, k8s will schedule a new pod on any 1 of the nodes which has sufficient CPU and memory resources available. If you specified any special conditions like Pod affinity or Node affinity, k8s will follow them accordingly. Please check out this official document.
|
76387953 | 76389700 | I am plotting 3D data using matplotlib==3.3.4:
fig = plt.figure(figsize=(15, 10))
ax = fig.gca(projection="3d")
ax.view_init(30, 0)
# facecolors is a 3D volume with some processing
ax.voxels(
x, y, z, facecolors[:, :, :, -1] != 0, facecolors=facecolors, shade=False
)
fig.canvas.draw()
image_flat = np.frombuffer(fig.canvas.tostring_rgb(), dtype="uint8")
image_shape = (*fig.canvas.get_width_height(), 3) # (1500, 1000, 3)
ax.imshow(image_flat.reshape(*image_shape))
plt.show()
(I am making some improvements on BraTS20_3dUnet_3dAutoEncoder with inspiration from Figure to image as a numpy array).
However, when I actually plot the image, there are two copies:
What am I doing wrong? I can't figure out where the second image is coming from.
| Image duplicated when using fig.canvas.tostring_rgb() | The NumPy array ordering is (rows, cols, ch). The code image_shape = (*fig.canvas.get_width_height(), 3) switches rows and cols, which leads to the output image being incorrectly shaped, which looks like two copies.
Replace image_shape = (*fig.canvas.get_width_height(), 3) with:
image_shape = (*fig.canvas.get_width_height()[::-1], 3)
To avoid confusion, it's better to use two lines of code:
cols, rows = fig.canvas.get_width_height()
image_shape = (rows, cols, 3)
Reproducible example (using data from here):
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure(figsize=(15, 10))
ax = fig.gca(projection="3d")
ax.view_init(30, 0)
# https://stackoverflow.com/questions/76387953/image-duplicated-when-using-matplotlib-fig-canvas-tostring-rgb
# prepare some coordinates
x, y, z = np.indices((8, 8, 8))
# draw cuboids in the top left and bottom right corners, and a link between
# them
cube1 = (x < 3) & (y < 3) & (z < 3)
cube2 = (x >= 5) & (y >= 5) & (z >= 5)
link = abs(x - y) + abs(y - z) + abs(z - x) <= 2
# combine the objects into a single boolean array
voxelarray = cube1 | cube2 | link
# set the colors of each object
colors = np.empty(voxelarray.shape, dtype=object)
colors[link] = 'red'
colors[cube1] = 'blue'
colors[cube2] = 'green'
# and plot everything
#ax = plt.figure().add_subplot(projection='3d')
ax.voxels(voxelarray, facecolors=colors, edgecolor='k')
fig.canvas.draw()
image_flat = np.frombuffer(fig.canvas.tostring_rgb(), dtype="uint8")
#image_shape = (*fig.canvas.get_width_height(), 3) # (1500, 1000, 3)
#image_shape = (*fig.canvas.get_width_height()[::-1], 3) # It should be (1000, 1500, 3) instead of (1500, 1000, 3)
cols, rows = fig.canvas.get_width_height()
image_shape = (rows, cols, 3)
img = image_flat.reshape(*image_shape)
plt.figure()
plt.imshow(img)
plt.show()
Output image before fixing the code:
Output image after fixing the code:
|
76389973 | 76390614 | Are there any elvis like operator in Ocaml ?
Any sort of optional chaining that return the right value when the left one is empty, a default value operator, like the |> operator with opposite effect.
If not what are the good practices ?
As an example of a use case :
let get_val val_1 val_2 =
if (val_1) then (val_1) else (val_2);;
Are there any syntactic sugar ?
| Are there any default value tricks / Elvis Operator in Ocaml? | First, in OCaml if ... then ... else ... is an expression and thus there is no need to have a distinct ternary operator.
Second, OCaml has optional arguments.
If I guess correctly the supposed semantics of your get_val function, it can be written:
let default = []
let get_val ?(val_1=default) () = val_1
which gives you [] as the default value when the named argument val_1 is absent
let () =
assert ([] = get_val ())
or the val_1 argument otherwise:
let () =
assert ([2;3;4] = get_val ~val_1:[2;3;4] ())
|
76388435 | 76389815 | I am using axios in React to send an HTTP request to a Laravel backend, but CORS prevents Laravel from answering my request. I catch the below error:
p://localhost/todo/laravel/public/api/test' from origin 'http://localhost:3000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
my react code:
axios.get('http://localhost/todo/laravel/public/api/test', { withCredentials: true })
.then(function (response) {
// handle success
console.log(response);
})
.catch(function (error) {
// handle error
console.log(error);
})
.then(function () {
// always executed
});
my laravel code:
public function all(): bool|string
{
$todos = Todo::all();
return json_encode($todos);
}
my cors config:
<?php
return [
/*
|--------------------------------------------------------------------------
| Cross-Origin Resource Sharing (CORS) Configuration
|--------------------------------------------------------------------------
|
| Here you may configure your settings for cross-origin resource sharing
| or "CORS". This determines what cross-origin operations may execute
| in web browsers. You are free to adjust these settings as needed.
|
| To learn more: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
|
*/
'paths' => ['*'],
'allowed_methods' => ['*'],
'allowed_origins' => ['*'],
'allowed_origins_patterns' => [],
'allowed_headers' => ['*'],
'exposed_headers' => false,
'max_age' => false,
'supports_credentials' => false,
];
What is the problem?
| send a get http request by axios on react to laravel 10 backend on localhost | try
'paths' => ['todo/*'],
I'm not sure that ['*'] is allowed.
|
76387968 | 76389842 | Sometimes, in a Data JPA environment, when three queries occur in one method, one query does not work normally in DB.
@Transactional
public void method1(Log log){
Long userId = log.getUserId();
User user = userRepository.findById(userId).orElseThrow(RuntimeException::new)
BigDecimal point = BigDecimal.valueOf(100L);
user.minusPoint(point);
log.minusPoint(point);
logRepository.save(new Log(point));
}
It works normally for one case.
However, I ran it on several occasions, and one did not work properly.
ex) logList.forEach(log-> a.method1(log));
did not work "log.minusPoint(point);" in all cases.
The data in the DB has not changed about log.
But the new log data was inserted and the user data has been changed.
It was confirmed that the update, insert query was created successfully in the test environment.
| Sometimes, in a Data JPA environment, when three queries occur in one method, one query does not work normally in DB | You can enable SQL logging and check what data is actually passed to the database.
One of the reasons can be that the @Transactional annotation doesn't work. You need to get the service from the Spring context to allow Spring to wrap method1() with a proxy method that does the dirty checking and saves the data.
The @Transactional annotation is a bit messy. It is primarily not about transactions, but about dirty checking and Hibernate session control.
When you do
logRepository.save(new Log(point));
the data is saved because logRepository.save() has its own @Transactional annotation and makes an explicit call to save the data.
log.minusPoint(point) could work if @Transactional works correctly. A Spring proxy would open a Hibernate session (Persistence Context) before the method1() call and close it at the end. While closing the session, Hibernate does dirty checking: it checks whether any objects in the session were changed and saves such objects.
Another reason can be that you load log in a method without @Transactional or in a method with own @Transactional. Different @Transactional means different Persistent Context.
One exception — when @Transactional of method1() is wrapped with another @Transactional that is used for a method that calls method1().
It is bad practice to solve all issues with putting @Transactional everywhere. You can just do
logRepository.save(log)
|
76392047 | 76392103 | I am trying to write a mongo query to help produce a report from the data.
The documents look something like this:
[{
"DeviceType" : "A",
"DeviceStatus" : "On"
},
{
"Device Type" : "B",
"Device Status" : "On"
},
{
"DeviceType" : "A",
"DeviceStatus" : "Off"
},
{
"DeviceType" : "A",
"DeviceStatus" : "On"
}]
The DeviceType field can take any string value and the DeviceStatus field can be any one of On, Off or InRepair. I want to write a mongo query that displays DeviceStatus count for each one of the devices. The result should be something like this:
[{
"DeviceType" : "A",
"DeviceStatusOnCount" : "15",
"DeviceStatusOffCount" : "13",
"DeviceStatusInRepairCount" : "12",
},
{
"DeviceType" : "B",
"DeviceStatusOnCount" : "6",
"DeviceStatusOffCount" : "14",
"DeviceStatusInRepairCount" : "2",
}]
How can I achieve this in mongo?
My current query can only group based on the DeviceType:
db.Collection_Name.aggregate([
{ $group: { _id: "$DeviceType", count: { $sum: 1 } } }
])
| How can I count over multiple fields in mongo? | You can use $cond + $eq to return either 1 or 0 to $sum operator:
db.collection.aggregate([
{
$group: {
_id: "$DeviceType",
DeviceStatusOnCount: { $sum: { $cond: [ { $eq: [ "$DeviceStatus", "On" ] }, 1, 0 ] } },
DeviceStatusOffCount: { $sum: { $cond: [ { $eq: [ "$DeviceStatus", "Off" ] }, 1, 0 ] } },
DeviceStatusInRepairCount: { $sum: { $cond: [ { $eq: [ "$DeviceStatus", "InRepair" ] }, 1, 0 ] } }
}
},
{$addFields: {DeviceType: '$_id'}}, {$project: {_id:0}}
])
Test: mongoplayground
|
76390575 | 76390615 | (MacOS Monterey):
I have external hard disks with files written under Windows, using Cygwin rsync.
The files are perfectly readable under MacOS, but when I want to delete/overwrite them, quite some appear "locked" (operation not permitted when doing a rm) on MacOS. Using the Finder, I can remove the lock by getting the context menu on the file, choose "GetInfo", and unchecked the Locked property.
But there are too many of them to do it by hand. I would like to run recursively over the file tree and uncheck the "locked" attribute programmatically. Of course the problem is not the traversal of the directory tree (can done by find for instance), but the change of the attribute from the command line.
chmod does not help here (as the files are 0777). Any other command I can use?
| How to remove the "locked" flag (programmatically) | I just found the command to do so:
chflags nouchg FILENAME
The command is described here
|
76388274 | 76389975 | I have a scenario where my pipeline should update the app registration with an additional redirectUrl.
I have managed to extract the current web.redirectUris with the following:
existing_urls=$(az ad app show --id '<client-id>' --query "[web.redirectUris]" --output tsv)
I would like to achieve something like this
existing_urls=$(az ad app show --id '<client-id>' --query "[web.redirectUris]" --output tsv)
az ad app update --id '<client-id>' --web-redirect-uris "$existing_urls https://hostname.com/newCallback"
I have tried updating the web.redirectUris in two ways and both of them have failed when I pass multiple redirect URIs.
Attempt 1
az ad app update --id '<client-id>' --web-redirect-uris "https://hostname.com/callbackx https://hostname.com/callbacky"
One or more properties contains invalid values.
However when having only one uri this worked fine
az ad app update --id '<client-id>' --web-redirect-uris "https://hostname.com/callbackx"
Attempt 2
This one fails regardless of number of redirectUris that are passed
az ad app update --id '<client-id>' --set "web.redirectUris=['https://hostname.com/callbackx', 'https://hostname.com/callbacky']"
Couldn't find 'web' in ''. Available options: []
| Updating web redirect uri of Azure AD app registration | I tried it as shown, but got the same error:
az ad app show --id 1e7bxxx7830
existing_urls=$(az ad app show --id 1e7b8fxxxx830 --query "[web.redirectUris]")
az ad app update --id 1e7xxx0a7830 --web-redirect-uris "$existing_urls https://hostname.com/newCallback"
$updated_urls="$existing_urls https://hostname.com/newCallback"
az ad app update --id 1e7b8xxx0a7830 --set "web.redirectUris='$updated_urls'"
az ad app update --id 1e7b8fxxxd0a7830 --set "web.redirectUris='$updated_urls'"
Error:
Couldn't find 'web' in ''. Available options: []
The following command worked for me in the Azure CLI for updating multiple redirect URLs:
az ad app update --id '1e7bxxxa7830' --web-redirect-uris "https://hostname.com/callback" "https://jwt.ms" "https://myexampleapp.com"
here --id is clientId .
So give the command with required urls as
az ad app update --id '1e7bxxxa7830' --web-redirect-uris "<url1>" "<url2>" "<url3>"
upon az ad app show --id 1e7b8xxxx830
|
76390603 | 76390619 | I'm trying to find the closest value in a reference data frame but it is not outputting the correct row.
I am using the below data frame which is then used to find the relevant row corresponding to the closest value in column 'P' to a defined variable. For example if p = 0.22222 then the code should output row 2.
DF:
P n1 n2 n3 n4 n5 n6 n7 n8 n9
0 NaN 0.0 0.2000 0.4000 0.6000 0.8000 1.0000 1.2000 1.4000 1.6000
1 0.0 1.0 0.8039 0.6286 0.4855 0.3753 0.2929 0.2318 0.1863 0.1520
2 0.2 1.0 0.7983 0.6201 0.4771 0.3683 0.2876 0.2279 0.1835 0.1500
3 0.4 1.0 0.7789 0.5924 0.4508 0.3473 0.2720 0.2167 0.1754 0.1442
4 0.6 1.0 0.7349 0.5377 0.4043 0.3124 0.2470 0.1989 0.1628 0.1351
5 0.8 1.0 0.6301 0.4433 0.3368 0.2658 0.2147 0.1762 0.1465 0.1234
6 1.0 0.5 0.3828 0.3105 0.2559 0.2130 0.1787 0.1510 0.1286 0.1102
7 1.2 0.0 0.1544 0.1871 0.1795 0.1621 0.1433 0.1257 0.1103 0.0965
8 1.4 0.0 0.0717 0.1101 0.1216 0.1197 0.1120 0.1024 0.0925 0.0831
9 1.6 0.0 0.0400 0.0682 0.0829 0.0876 0.0865 0.0824 0.0765 0.0707
10 1.8 0.0 0.0249 0.0449 0.0580 0.0647 0.0668 0.0659 0.0633 0.0597
11 2.0 0.0 0.0168 0.0312 0.0418 0.0485 0.0519 0.0528 0.0520 0.0502
12 3.0 0.0 0.0042 0.0082 0.0118 0.0149 0.0174 0.0193 0.0207 0.0216
The function I am using however outputs the incorrect value:
p = 0.2020202
closest_p = df.iloc[(df['P']-p).abs().argsort()[:1]]
Expected output:
P n1 n2 n3 n4 n5 n6 n7 n8 n9
2 0.2 1.0 0.7983 0.6201 0.4771 0.3683 0.2876 0.2279 0.1835 0.1500
However it is only outputting the last row -
P n1 n2 n3 n4 n5 n6 n7 n8 n9
12 3.0 0.0 0.0042 0.0082 0.0118 0.0149 0.0174 0.0193 0.0207 0.0216
Where am I going wrong?
| I'm trying to find the closest value in a reference data frame but it is not outputting the correct row. Where am I going wrong? | You need to use idxmin:
closest_p = (df['P']-p).abs().idxmin()
Output: 2
For the row: df.loc[(df['P']-p).abs().idxmin()]
A fix of your approach would have been to use sort_values (but it's less efficient):
closest_p = df.loc[(df['P']-p).abs().sort_values().index[0]]
|
76391811 | 76392128 | In this code
#include <type_traits>
#include <iostream>
template <class T>
void func(const T& a)
{
if constexpr(std::is_same_v<T,int>)
{
static_cast<void>(a);
}
else if constexpr(std::is_same_v<T,double>)
{
// oops forgot to use it here
}
else
{
}
}
int main() {
func(4);
func("this");
}
why doesn't the compiler warn about unused variable in the else-s()? (with -Wall)
My understanding is that logically the instantiations of the method are completely different. If the argument is not being used in one of the instantiations of the template, isn't it an unused variable?
Or does the language/compiler not interpret it like that.
| Unused parameter warning in templated function with if constexpr | We can say that the parameter a is not fully unused, it's conditionally unused.
The C++ standard requires compilers to generate diagnostic messages for ill-formed programs. Your program is not ill-formed. The C++ standard does not require compilers to generate other warnings at all. So it's up to the compiler vendor to decide how to implement that.
While I personally would like to be warned about conditionally unused variables (your example is a good demo why), no compiler vendor has implemented that. Why? They test new version of compilers on large code bases to see whether new C++ features will cause breaking changes. Likely, during those tests, a lot of warnings were generated for code like yours, but the code was considered totally fine by C++ experts. So the warnings were adapted to not occur in such situations. That's to ensure that code with 0 warnings is possible.
|
76391083 | 76392133 | I have a library, namely pfapack (1), that I want to use in rust. So the initial code is written in Fortran, but a C interface exists and works well. I want to make a Rust crate (2) that ships this code so I can use it in any other Rust project (3). Doing so, (3) gives an undefined symbol error.
I have written a build script in (2) that calls (1)'s build method. I then use cc to combine the object files and link the needed libraries. I then used bindgen to generate bindings for the functions I need. I would expect that (3) would be able to see the object that were compiled at (2) build time, but it can't.
The exact step taken are:
New crate with (1)'s source code.
(2) build.rs
use std::process::Command;
fn main() {
// This makefile call a custom root makefile that only calls the two
// makefiles in c_interface/ and fortran/
Command::new("make").output()
.expect("Failed to make");
println!("cargo:rustc-link-search=c_interface");
println!("cargo:rustc-link-search=fortran");
println!("cargo:rustc-link-lib=static=pfapack");
println!("cargo:rustc-link-lib=static=cpfapack");
println!("cargo:rustc-link-lib=gfortran");
}
(3) build.rs
fn main() {
println!("cargo:rustc-link-lib=lapack");
println!("cargo:rustc-link-lib=blas");
}
Original example compilation to use pfapack
gcc -O3 -I c_interface/ foo.c -o foo.out c_interface/libcpfapack.a fortran/libpfapack.a -lm -lblas -llapack -lgfortran
The command used to generate bindings came from https://github.com/blas-lapack-rs/lapack-sys/blob/master/bin/generate.sh as it uses the same naming convention:
generate.sh
#!/bin/bash
set -eux
bindgen --allowlist-function='^.*_$' --use-core pfapack.h \
| sed -e 's/::std::os::raw:://g' \
| sed -e '/__darwin_size_t/d' \
> pfapack.rs
rustfmt pfapack.rs
Compiling (3) gives this error https://pastebin.com/4FubsYx9
Ignoring a big blob of flags, the error:
= note: /usr/bin/ld: /home/dumbo/Documents/test_pfapack/target/debug/deps/test_pfapack-d08bc25fe63b6ef8.ka29pyd46xgunxk.rcgu.o: in function `pfapack_sys::dskpfa':
/home/dumbo/Documents/pfapack-sys/src/pfapack-bind.rs:249: undefined reference to `dskpfa'
collect2: error: ld returned 1 exit status
= note: some `extern` functions couldn't be found; some native libraries may need to be installed or have their path specified
= note: use the `-l` flag to specify native libraries to link
= note: use the `cargo:rustc-link-lib` directive to specify the native libraries to link with Cargo (see https://doc.rust-lang.org/cargo/reference/build-scripts.html#cargorustc-link-libkindname)
error: could not compile `test_pfapack` due to previous error
(2) Cargo.toml
[package]
name = "pfapack-sys"
version = "0.1.0"
edition = "2021"
links = "pfapack"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
libc = "0.2"
[dependencies.num-complex]
version = "0.4"
default-features = false
[lib]
name = "pfapack_sys"
[build-dependencies]
cc = "1.0.79"
| Trouble with undefined symbols in Rust's ffi when using a C/Fortran library | The solution was quite simple and right under my nose. Note that the symbol not found does not have an ending underscore, as pfapack functions do after mangling. This was because I tried to do the bindings to the C interface and the pretty Rust functions in the same crate. Looking at the Rust book, https://doc.rust-lang.org/cargo/reference/build-scripts.html#-sys-packages, the convention is to have a *-sys crate that just does the binding, and have another crate named * that does the pretty functions. Thus, I needed to update the build script.
Here is the updated (2) build.rs
// build script
use std::process::Command;
fn main() {
Command::new("make").output()
.expect("Failed to make");
println!("cargo:rustc-link-search=c_interface");
println!("cargo:rustc-link-search=fortran");
println!("cargo:rustc-link-lib=static=pfapack");
println!("cargo:rustc-link-lib=static=cpfapack");
println!("cargo:rustc-link-lib=gfortran");
println!("cargo:rustc-link-lib=lapack");
println!("cargo:rustc-link-lib=blas");
}
I renamed this package pfapack-sys. Created a new crate named pfapack(4) that depends on pfapack-sys. Now, update Cargo.toml(3) to depend on (4). Now works out of the box.
|
76390438 | 76390623 | I wonder why Apache Parquet writes metadata at the end of the file instead of the beginning?
In the official documentation of Apache Parquet, I found that Metadata is written after the data to allow for single pass writing.. Is the metadata written at the end to ensure the integrity of the file? I don't understand what this sentence really means, if someone could explain it to me, I'd appreciate it.
| Why metadata is written at the end of the file in Apache Parquet? | I think the main reason is so you can write bigger than memory data to the same file.
The meta data contains information about the schema of the data (type of the columns) and its shape (number of row groups, size of each row groups).
So in order to generate the metadata you need to know what the data is made of. This can be a problem if your data doesn't fit in memory.
In this case, you should still be able to split your data in manageable row groups (that fit in memory) and append them to the file one by one, keeping track of the meta data, and appending the meta data at the end.
import pyarrow as pa
import pyarrow.parquet as pq
schema = pa.schema([pa.field("col1", pa.int32())])
with pq.ParquetWriter("table.parquet", schema=schema) as file:
for i in range(0, 10):
file.write(pa.table({"col1": [i] * 10}, schema=schema))
If you're looking for an alternative where the data can be streamed, with the meta data being written at the beginning, you should look at the arrow IPC format.
|
76387965 | 76390012 | We are trying to integrate NewRelic with our Fast API Service. It works fine when we are not providing numbers of worker in uvicorn config
if __name__ == "__main__":
# newrelic.agent.register_application()
import newrelic.agent
newrelic.agent.initialize()
print("api key ", os.environ.get("NEW_RELIC_LICENSE_KEY", 1))
print("app name ", os.environ.get("NEW_RELIC_APP_NAME", 1))
# printing to make sure licence key and app name are defined in env variables.
uvicorn.run(app, host='0.0.0.0', port=5600)
But when we are defining numbers of workers in the uvicorn config, NewRelic does not show any data in dashboard.
if __name__ == "__main__":
# newrelic.agent.register_application()
import newrelic.agent
newrelic.agent.initialize()
print("api key ", os.environ.get("NEW_RELIC_LICENSE_KEY", 1))
print("app name ", os.environ.get("NEW_RELIC_APP_NAME", 1))
# printing to make sure licence key and app name are defined in env variables.
uvicorn.run("new_relic_test:app", host='0.0.0.0', port=5600, workers=15)
Is that due to multiple server process being created by uvicorn workers?
I tried removing workers and it worked fine. But with numbers of workers it does not work
| NewRelic Not working with multiple workers Fast API Uvicorn | The reason is the following: when you run uvicorn.run with only one process, it will start the server as a normal Python function. But, when you run with workers=n, uvicorn will start n new processes, and the original Python process will remain as an orchestrator between these. In these new processes, it will not start your code with a different entrypoint, meaning the if __name__ == "__main__" will not run (this is also why you must specify your app as a string instead of the Python instance when running more than one worker, since uvicorn needs to know where to import your app from). So in your case, newrelic.agent.initialize() is not run.
I would suggest moving everything except uvicorn.run out of the if __name__ == "__main__" block and put it in the same file as where you define your app.
|
76392029 | 76392136 | I am trying to open a C-file in Python by using the "subprocess"-module.
I cannot execute the program without triggering a 'FileNotFoundError', and if I put in the full path of the file, I get an 'Exec format error'.
I don't necessarily need to use the 'subprocess'-module, I just don't know of any other method.
The Python-Skript:
#!/usr/bin/env python
import subprocess
shellcode_file = "shellcode.bin"
try:
with open(shellcode_file, "rb") as f:
shellcode = f.read();
subprocess.call(["script.c", shellcode])
except FileNotFoundError as e:
print(e, "not found.")
I am pretty sure that I am opening the File wrong but I couldn't find a way to fix it.
| How can I open a C-file in Python using 'subprocess' without triggering an Exception? | You are passing the C source file itself, not the compiled executable (*.exe).
For further info if needed, C is not an interpreted/scripted language, so it cannot be run through its code file like other scripting languages (e.g. Python, Batch). It's a compiled language, meaning that it has to be run through a compiler to generate a separate file (almost always a .exe file) that you can then run your program with.
Compile your C program first and pass the executable file.
subprocess.call(["script.exe", shellcode])
|
76390211 | 76390625 | I am trying to scrape the medium website. Here is my code.
import requests
from bs4 import BeautifulSoup as bs
class Publication:
def __init__(self, publication):
self.publication = publication
self.headers = {'User-Agent':'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36'}# mimics a browser's request
def get_articles(self):
"Get the articles of the user/publication which was given as input"
publication = self.publication
r = requests.get(f"https://{publication}.com/", headers=self.headers)
soup = bs(r.text, 'lxml')
elements = soup.find_all('h2')
for x in elements:
print(x.text)
publication = Publication('towardsdatascience')
publication.get_articles()
It is working somewhat good but it is not scraping all the titles. It is only getting the some of the articles from the top of the page. I want it to get all the article names from the page. It also getting the side bar stuff like who to follow and all. I dont want that. How do I do that?
Here is the output of my code:
How to Rewrite and Optimize Your SQL Queries to Pandas in 5 Simple Examples
Storytelling with Charts
Simplify Your Data Preparation with These Four Lesser-Known Scikit-Learn Classes
Non-Parametric Tests for Beginners (Part 1: Rank and Sign Tests)
BigQuery Best Practices: Unleash the Full Potential of Your Data Warehouse
How to Test Your Python Code with Pytest
7 Signs You’ve Become an Advanced Sklearn User Without Even Realizing It
How Data Scientists Save Time
MLOps: What is Operational Tempo?
Finding Your Dream Master’s Program in AI
Editors
TDS Editors
Ben Huberman
Caitlin Kindig
Sign up for The Variable
| WebScraping - BeautifulSoup Python | As Barry the Platipus mentions in a comment, the content you want is loaded via Javascript. A complicating factor is that this content is only loaded when you scroll the page, so even a naive Selenium-based solution like this will still return only the same set of results as your existing code:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
options = Options()
options.add_argument("--headless")
driver = webdriver.Chrome(options=options)
class Publication:
def __init__(self, publication):
self.publication = publication
def get_articles(self):
"Get the articles of the user/publication which was given as input"
publication = self.publication
driver.get(f"https://{publication}.com/")
elements = driver.find_elements(By.CSS_SELECTOR, "h2")
for x in elements:
print(x.text)
publication = Publication("towardsdatascience")
publication.get_articles()
To get more than the initial set of articles, we need to scroll the page. For example, if we add a simple loop to scroll the page a few times before querying for h2 elements, like this:
def get_articles(self):
"Get the articles of the user/publication which was given as input"
publication = self.publication
driver.get(f"https://{publication}.com/")
for x in range(3):
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
# The sleep here is to give the page time to respond.
time.sleep(0.2)
elements = driver.find_elements(By.CSS_SELECTOR, "h2")
for x in elements:
print(x.text)
Then the output of the code is:
Large Language Models in Molecular Biology
How to Rewrite and Optimize Your SQL Queries to Pandas in 5 Simple Examples
Storytelling with Charts
Simplify Your Data Preparation with These Four Lesser-Known Scikit-Learn Classes
Non-Parametric Tests for Beginners (Part 1: Rank and Sign Tests)
BigQuery Best Practices: Unleash the Full Potential of Your Data Warehouse
How to Test Your Python Code with Pytest
7 Signs You’ve Become an Advanced Sklearn User Without Even Realizing It
How Data Scientists Save Time
MLOps: What is Operational Tempo?
Finding Your Dream Master’s Program in AI
Temporary Variables in Python: Readability versus Performance
Naive Bayes Classification
Predicting the Functionality of Water Pumps with XGBoost
Detection of Credit Card Fraud with an Autoencoder
4 Reasons Why I Won’t Sign the “Existential Risk” New Statement
The Data-centric AI Concepts in Segment Anything
3D Deep Learning Python Tutorial: PointNet Data Preparation
Why Trust and Safety in Enterprise AI Is (Relatively) Easy
The Principles of a Modern Computer Scientist
That page appears to be an "infinite scroll" style of page, so you probably want to set a limit on how many times you scroll to find new content.
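One common way to cap an infinite scroll is to stop when the page height stops growing or after a fixed number of attempts (a sketch; max_scrolls and pause are arbitrary values):
import time
def scroll_page(driver, max_scrolls=10, pause=0.2):
    last_height = driver.execute_script("return document.body.scrollHeight")
    for _ in range(max_scrolls):
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(pause)
        new_height = driver.execute_script("return document.body.scrollHeight")
        if new_height == last_height:
            break  # no new content loaded
        last_height = new_height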
|
76390552 | 76390629 | I have a simple Django app to show the output of an SSH remote command.
views.py:
from django.http import HttpResponse
from subprocess import *
def index(request):
with open('/home/python/django/cron/sites.txt', 'r') as file:
for site in file:
# out = getoutput(f"ssh -o StrictHostKeyChecking=accept-new -p1994 root@{site} crontab -l")
out = run(["ssh", "-o StrictHostKeyChecking=accept-new", "-p1994", f"root@{site}".strip(), "crontab", "-l"])
return HttpResponse(out)
urls.py:
from django.contrib import admin
from django.urls import path
# imported views
from cron import views
urlpatterns = [
path('admin/', admin.site.urls),
# configured the url
path('',views.index, name="homepage")
]
sites.txt:
1.1.1.1
2.2.2.2
3.3.3.3
The issue is when I run localhost:5000, I see this:
CompletedProcess(args=['ssh', '-o StrictHostKeyChecking=accept-new', '-p1994', 'root@3.3.3.3', 'crontab', '-l'], returncode=0)
While I should see something like this:
* * * * * ls
* * * * * date
* * * * * pwd
I tried with both run and getoutput, but they either don't connect or the output is shown in terminal only.
How can I run this and show the output in the webpage?
| Python show output of remote SSH command in web page in Django | You're returning the CompletedProcess object instead of its output.
Try updating the run command to capture output by adding the optional parameter
out = run([...], capture_output=True)
AND changing the output to show the stdout instead of the object
return HttpResponse(out.stdout)
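For completeness, here is a sketch of the whole view with both changes applied. It also collects the output from every site instead of only the last one, which is an assumption about what you want (the loop in the question overwrites out on each iteration), and text=True just makes stdout a str instead of bytes.
def index(request):
    output = []
    with open('/home/python/django/cron/sites.txt', 'r') as file:
        for site in file:
            result = run(
                ["ssh", "-o StrictHostKeyChecking=accept-new", "-p1994",
                 f"root@{site.strip()}", "crontab", "-l"],
                capture_output=True, text=True)
            output.append(result.stdout)
    return HttpResponse("\n".join(output), content_type="text/plain")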
|
76392117 | 76392146 | sorry if this is a simplistic question or demonstrates some misunderstanding of HTML, this is my first time using it. If there's any missing info or stuff I should have included in the question, I'm happy to provide it.
I have a python program, app.py, and it calls and renders an HTML page. That HTML page contains a few inputs and I understand how to place a given string as the default value for the input with the Value attribute.
This is the html:
<form action="/new" method="post">
<label for="qty_wheels">Number of wheels:</label>
<input type="text" name="qty_wheels" value = "Qty_Wheels" />
<label for="flag_color">Colour of flag:</label>
<input type="text" name="flag_color" />
<input type="submit" class="button">
</form>
Where "Qty_Wheels" is written though, I want to place a string I've passed in from the python end.
This is the python call for this html page:
return render_template("buggy-form.html", Qty_Wheels = given_qty)
I've passed in the Qty_Wheels string here, and I want to print the value of that in the input. Is this possible and how can I do it? I've tried a lot of different syntax variants, but I can't find how I get it to interpret it as a string and print the content.
| Can I pass a variable string into the "Value" attribute for a HTML input tag? | Flask's templates use the {{ val }} syntax:
Put the double curly brackets around Qty_Wheels:
<form action="/new" method="post">
<label for="qty_wheels">Number of wheels:</label>
<input type="text" name="qty_wheels" value = "{{ Qty_Wheels }}" />
<label for="flag_color">Colour of flag:</label>
<input type="text" name="flag_color" />
<input type="submit" class="button">
</form>
For more on Flask templates see: https://flask.palletsprojects.com/en/2.3.x/tutorial/templates/
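On the Python side, a minimal route that passes the value might look like the sketch below; the route path and the given_qty value are just placeholders matching the question.
from flask import Flask, render_template

app = Flask(__name__)

@app.route("/edit")
def edit():
    given_qty = "4"  # whatever default you want pre-filled
    return render_template("buggy-form.html", Qty_Wheels=given_qty)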
|
76388849 | 76390020 | I am working on a React project with Tailwind. I checked the inspector in Chrome and saw the same Tailwind variables declared multiple times. I thought maybe something was not working properly in our project, but when I checked Shopify it was the same, so I wonder why it works this way.
The screenshots are taken from the first page of Shopify.
| Why does Tailwind declare CSS variables multiple times? | There are Tailwind CSS default variables defined for ::backdrop in a separate rule from the *, ::before, ::after rule for two reasons:
The *, ::before, ::after selector does not cover ::backdrop. As per tailwindlabs/tailwindcss#8526:
This PR adds the ::backdrop pseudo-element to our universal default rules, which fixes an issue where utilities like backdrop:backdrop-blur would not work because the variables that rule depended on were not defined.
As for why it is in a separate rule, it could be due to browser support. According to MDN, the last major web browser to support ::backdrop was Safari 15.4 which was released on March 14th 2022. The aforementioned pull request tailwindlabs/tailwindcss#8526 was merged on June 6th 2022 and released with v3.1.0 on June 8th 2022.
This means at that time, only very recent Safari browsers would have had support for the ::backdrop element. If ::backdrop were included in the *, ::before, ::after rule selector as *, ::before, ::after, ::backdrop, this would break sites on older Safari browsers, because the whole *, ::before, ::after, ::backdrop rule would not apply when one of its components is not supported. This could be a major regression, so they separated out the ::backdrop selector into its own rule in pull request #8567 that was released in v3.1.1.
|
76389558 | 76390634 | I've started to play with middlewares, and it's great!
Here's an example of how I can inject a middleware when calling endpoints /api/playlists or /api/playlists/:id (I edited the file src/api/playlist/routes/playlist.js).
module.exports = createCoreRouter('api::playlist.playlist', {
config: {
find: {
middlewares: ['api::playlist.playlist.find']
},
findOne: {
middlewares: ['api::playlist.playlist.find-one']
},
},
})
Of course, I also created my middlewares in src/api/playlist/middlewares/find.js and src/api/playlist/middlewares/find-one.js)
But now, I want to add another middleware to update the responses returned by the API when calling /api/users or /api/users/:id.
Since there is no directory src/api/user in the file tree, how should I register a middleware for this?
Thanks
| In Strapi V4, How should I register a middleware to alter the responses returned by /user or /user/:id? | I finally found out:
create the file src/extensions/user-permissions/strapi-server.js
This is mine. It registers a middleware for each of those endpoints:
GET /users/me : plugin::spiff-api.user-me
GET /users : plugin::spiff-api.user-find
GET /users/:id : plugin::spiff-api.user-find-one
("use strict");
module.exports = (plugin) => {
// if you see this, the configuration does load:
console.log("Custom strapi-server.js for user-permissions");
//get api routes for 'user-permissions'
const apiRoutes = plugin.routes['content-api'].routes;
//add middleware for GET /users/me
apiRoutes
.filter(route => route.handler === 'user.me')
.map(route => {
route.config.middlewares = [
...(route.config.middlewares || []),
'plugin::spiff-api.user-me'//middleware name
];
return route;
});
//add middleware for GET /users/:id
apiRoutes
.filter(route => route.handler === 'user.findOne')
.map(route => {
route.config.middlewares = [
...(route.config.middlewares || []),
'plugin::spiff-api.user-find-one'//middleware name
];
return route;
});
//add middleware for GET /users
apiRoutes
.filter(route => route.handler === 'user.find')
.map(route => {
console.log(route)
route.config.middlewares = [
...(route.config.middlewares || []),
'plugin::spiff-api.user-find'//middleware name
];
return route;
});
return plugin;
};
Then, create your middlewares, e.g.
export default (config, { strapi })=> {
return async (ctx, next) => {
console.info("running middleware 'user-find-one.js'");
console.log();
//update your query here if needed
//eg. populate 'favoritePosts'
ctx.query.populate = {
...ctx.query.populate ?? {},
favoritePosts: {}
}
const controller = strapi.plugin('users-permissions').controller('user');
await controller.findOne(ctx);//this populates ctx.body
const response = ctx.body;
//update your response here if needed
ctx.body = response;
await next(); // pass control to the next middleware in the chain
}
}
|
76392112 | 76392173 | I'm trying to create a game in which, at the start, I have to ask the user how many times they want to play, and the user input must be an integer.
So I wrote the code shared below, but it doesn't work as I expected.
If anyone can help me figure out what I did wrong, it would be very helpful.
Thanks in advance.
let playCount = 0;
let regxNum = /^[0-9]+$/g; //Regular Expression to select all numbers between 0-9.
let checkPlayCount = 0;
let askPlayCount = () => { //Function to get game play count.
checkPlayCount = 0;
playCount = Number(prompt("How many times you want to play: ")); //Gets play count and convert into number;
checkPlayCount = Array.from(String(playCount), Number); //Converts number into array
}
askPlayCount();
//Code for validating input : must be number.
for(let i = 0; i < checkPlayCount.length; i++){
if(checkPlayCount[i] != regxNum){
console.log("Please enter valid input");
askPlayCount();
}
}
| Input validation with regular expression not working | The problem is that you convert the number into an array, while you can simply check the input directly against the regex. Here is my version:
let playCount = 0;
let regxNum = /^[0-9]+$/; // Regular Expression to match all numbers between 0-9
let askPlayCount = () => {
playCount = Number(prompt("How many times do you want to play: "));
}
askPlayCount();
// Code for validating input: must be a number.
while (!regxNum.test(playCount)) {
console.log("Please enter a valid input");
askPlayCount();
}
|
76392174 | 76392197 | I noticed this difference when comparing xxhash implementations in Python and Java. The hashes calculated by the xxhash library are the same as hexadecimal strings, but they differ when I try to get the calculated hash as an integer (or long) value.
I am sure that this is some kind of "endian" problem, but I couldn't find out how to get the same integer values in both languages.
Any ideas how and why this is happening?
Java Code:
String hexString = "d24ec4f1a98c6e5b";
System.out.println(new BigInteger(hexString,16).longValue());
// printed value -> -3292477735350538661
Python Code:
hexString = "d24ec4f1a98c6e5b"
print(int(hexString, 16))
# printed value -> 15154266338359012955
| Java and Python return different values when converting the hexadecimal to long | You converted the BigInteger to long, which is the reason for the difference. Because long is a signed 64-bit integer, the value wraps around to a negative number. If you just print the BigInteger as it is, it gives the same result.
System.out.println(new BigInteger(hexString,16));
// 15154266338359012955
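Conversely, if you want Python to reproduce Java's signed 64-bit longValue(), one standard-library way is to reinterpret the unsigned value as a signed 64-bit integer (a small sketch):
import struct

hexString = "d24ec4f1a98c6e5b"
unsigned = int(hexString, 16)                               # 15154266338359012955
signed, = struct.unpack("<q", struct.pack("<Q", unsigned))  # reinterpret as signed 64-bit
print(signed)                                               # -3292477735350538661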
|
76388858 | 76390104 | library(terra)
library(RColorBrewer)
# sample polygon
p <- system.file("ex/lux.shp", package="terra")
p <- terra::vect(p)
# sample raster
r <- system.file("ex/elev.tif", package="terra")
r <- terra::rast(r)
r <- terra::crop(r, p , snap = "out", mask = T)
terra::plot(r,
col = brewer.pal(9, "pastel1"),
cex.main = 2,
smooth = T,
legend = T,
plg = list(title = "Score"),
axes = TRUE,
mar=c(3,3,3,6))
plot(p, add = T)
How do I change the size and orientation of the legend title 'Score'? I want to orient the title so that it is vertical and runs along the legend, and also change the size of the legend title.
| change the size and orientation of legend title while plotting raster | You can add text wherever you want. For example:
library(terra)
p <- terra::vect(system.file("ex/lux.shp", package="terra"))
r <- terra::rast(system.file("ex/elev.tif", package="terra"))
plot(r, mar=c(3,3,3,6))
text(x=6.77, y=49.95, "Score", srt=-90, cex=2, xpd=NA, pos=4)
lines(p)
|
76392129 | 76392209 | When I was learning how React builds the virtual DOM, I used the editor on Babel's website. Why do the two editors produce different code?
| Why is babel home page different from the editor in the online tool | In the editor tool screenshot we can see that the selected option for the "React Runtime" configuration is "Automatic", while Babel's website editor looks like it's using the "Classic" option.
The difference between both is documented in https://babeljs.io/docs/babel-plugin-transform-react-jsx
There is also this blog post on React's blog, which explains the reasoning behind it.
|
76389865 | 76390685 | When working with ConstraintValidators, we just need to define the class and constraint and Spring is able to detect the validators and instantiate them automatically, injecting any beans we ask for in the constructor.
How do I add my custom Spring Validators (https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/validation/Validator.html) in the same manner?
| How to register custom Spring validator and have automatic dependency injection? | You have to let the controller know which validator will validate your request. I use the following code to find the supported validators:
@ControllerAdvice
@AllArgsConstructor
public class ControllerAdvisor {
private final List<Validator> validators;
@InitBinder
public void initBinder(@NonNull final WebDataBinder binder) {
final Object request= binder.getTarget();
if (request== null) {
return;
}
this.validators.stream()
.filter(validator -> validator.supports(request.getClass()))
.forEach(binder::addValidators);
}
}
Note that you have to create those validators as beans. Therefore, you should annotate those validators with @Component. Hope this helps.
|
76388494 | 76390287 | In a Kivy form I set up 2 RecycleViews (A and B). I would like to add the item clicked in RecycleView A to RecycleView B, using a Python 3 script:
from functools import partial
from kivy.app import App
from kivy.lang.builder import Builder
from kivy.uix.recycleview import RecycleView
class B(RecycleView):
def __init__(self,**kwargs):
super(B, self).__init__(**kwargs)
def doet(self,x):
self.data.append({'text':x})
print(self.data) #it prints correctly, so why doesn't update?
class A(RecycleView):
def __init__(self,**kwargs):
super(A, self).__init__(**kwargs)
self.data=[{'text':'FROM HERE','on_press':partial(app.b.doet,'TO HERE')}]
class app(App):
b=B()
def build(self):
return Builder.load_file('lab.kv')
app().run()
'lab.kv':
BoxLayout:
A:
viewclass:'Button'
RecycleBoxLayout:
default_size: None, dp(56)
default_size_hint: 1, None
size_hint_y: None
height: self.minimum_height
orientation: 'vertical'
B:
viewclass:'Button'
RecycleBoxLayout:
default_size: None, dp(56)
default_size_hint: 1, None
size_hint_y: None
height: self.minimum_height
orientation: 'vertical'
My script correctly updates the data dictionary, as I can see by printing it, but no items are physically added to RecycleView 'B'.
| Python3 - Kivy RecycleView's data Dictionary successfully updated, but not physically the Widget | The main problem with your code is that the line:
b=B()
is creating a new instance of B that is unrelated to the instance of B that is created in your kv. So anything done to that new instance of B will have no effect on what you see on the screen.
There are many possible approaches to do what you want. Here is one:
First, add an id to the B in your kv:
B:
id: b
Then, modify the App class:
class app(App):
def build(self):
root = Builder.load_file('lab.kv')
self.b = root.ids.b
return root
def doit(self, x):
self.b.doet(x)
The above build() method uses the new id to get a reference to the correct B instance, and saves that reference. The doit() method is just an intermediary to direct the call to the correct instance of B.
Then modify the A class to use this:
class A(RecycleView):
def __init__(self,**kwargs):
super(A, self).__init__(**kwargs)
self.data=[{'text':'FROM HERE','on_press':partial(App.get_running_app().doit,'TO HERE')}]
|
76390793 | 76392258 | I am trying to compare different where statements in my Kusto query using the Kusto Explorer app, so I would like to export the result from the Query Summary tab.
In case anyone has a query or a way to export this manually, it would be awesome.
Is it possible?
| How to export results from Query Summary from Kusto Explorer | In Kusto.Explorer, if you switch to the QueryCompletionInformation tab in the result set, you can see & copy the QueryResourceConsumption payload
|
76390320 | 76390714 | I am trying to move my bot, written using discord.py/pycord, to GitHub for easier access, and I accidentally pushed my bot token to the hub. Thankfully Discord reset it for me and nothing happened.
Now I want to use GitHub repository secrets to prevent this from happening, but I am having some trouble when trying to import the token into my code.
Here I've made a simple repo to experiment with this:
test.py:
import os
SECRET = os.environ['SECRET']
if SECRET == "TrustNo1":
print("No one can be trusted")
print(SECRET)
workflow.yml:
name: build bot.py
on: [push]
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: load content
uses: actions/checkout@v2
- name: load python
uses: actions/setup-python@v4
with:
python-version: '3.10' # install the python version needed
- name: start bot
env:
TOKEN: ${{ secrets.SECRET }}
run: python test.py
The following error occurs at the "start bot" step in the workflow:
Traceback (most recent call last):
File "/home/runner/work/test-repo/test-repo/test.py", line 2, in <module>
SECRET = os.environ['SECRET']
File "/opt/hostedtoolcache/Python/3.10.11/x64/lib/python3.10/os.py", line 680, in __getitem__
raise KeyError(key) from None
KeyError: 'SECRET'
Error: Process completed with exit code 1.
If I try to echo the SECRET in the workflow.yml I get ***, so it has access to the token, but when it is imported into Python it all breaks down.
I'm still quite new to git and GitHub, so please don't use advanced terms.
| How do I include GitHub secrets in a python script? | - name: start bot
env:
TOKEN: ${{ secrets.SECRET }}
In the GitHub Action workflow you exposed secrets.SECRET as an environment variable named TOKEN, but your Python code reads SECRET. Either change the name of the environment variable to SECRET:
- name: start bot
env:
SECRET: ${{ secrets.SECRET }}
or change your code:
SECRET = os.environ['TOKEN']
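As an optional extra, you can fail with a clearer message when the variable is missing instead of getting a bare KeyError (a small sketch; the name must match whatever you chose in the workflow's env block):
import os

SECRET = os.environ.get("SECRET")
if SECRET is None:
    raise SystemExit("SECRET environment variable is not set - check the workflow's env block")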
|
76381802 | 76390352 | I'm using this html to have the user choose a color:
<input type="color" id="main-color" name="head" value="#15347d"">
Then I'm using a button to call my python function
<button py-click="generate_color_schemes(False)" id="button-submit">Update!</button>
Works well. (generate_color_schemes() examines the Element("main-color").value to get the hex color.)
But I'd like to combine the two, so clicking on the input - opens a picker and fires the python function as the user clicks inside the color picker as well as when they leave.
But (as expected) adding py-click fires my function when the input is first clicked and not when the color chooser popup closes or as the user clicks inside the color picker.
I think I want py-click to trigger on the oninput event as well as the onchange event.
Is there a way to combine the input type='color' with py-click to get this behaviour ?
| Pyscript color input - triggers too soon | As you say, you've got the right syntax but the wrong event(s). Rather than listening for the click event, you can listen to the input event (which is dispatched every time the value of an input changes) or the change event (which is dispatched, in this case, when the color picker closes). To do this in PyScript (or to listen to any event), the HTML attribute is py-[event] where [event] is the type of the event. In this case, you'll use the py-input and py-change attributes.
Here's a working example in the current release of PyScript (2023.03.1), which I mention in case the events API changes in the future. The MDN docs have a longer example in JavaScript.
<!-- Written for PyScript 2023.03.1 -->
<input type="color" id="main-color" py-input="do_something_with_color()" py-change="do_something_else()">
<py-script>
import js
def do_something_with_color():
elem = js.document.getElementById("main-color")
value = elem.value
print(f'Chosen color: {value}') # do something here
def do_something_else():
elem = js.document.getElementById("main-color")
value = elem.value
print(f'The final color chosen was: {value}') # do something here
</py-script>
|
76389003 | 76390723 | I have a compute shader which updates a storage image which is later sampled by a fragment shader within a render pass.
From khronos vulkan synchronization examples I know I can insert a pipeline barrier before the render pass to make sure the fragment shader samples the image without hazards. Note the example is modified slightly to include more draw calls.
vkCmdDispatch(...); // update the image
VkImageMemoryBarrier2KHR imageMemoryBarrier = {
...
.srcStageMask = VK_PIPELINE_STAGE_2_COMPUTE_SHADER_BIT_KHR,
.srcAccessMask = VK_ACCESS_2_SHADER_WRITE_BIT_KHR,
.dstStageMask = VK_PIPELINE_STAGE_2_FRAGMENT_SHADER_BIT_KHR,
.dstAccessMask = VK_ACCESS_2_SHADER_READ_BIT_KHR,
.oldLayout = VK_IMAGE_LAYOUT_GENERAL,
.newLayout = VK_IMAGE_LAYOUT_READ_ONLY_OPTIMAL
/* .image and .subresourceRange should identify image subresource accessed */};
VkDependencyInfoKHR dependencyInfo = {
...
1, // imageMemoryBarrierCount
&imageMemoryBarrier, // pImageMemoryBarriers
...
}
vkCmdPipelineBarrier2KHR(commandBuffer, &dependencyInfo);
... // Render pass setup etc.
vkCmdDraw(...); // does not sample the image
vkCmdDraw(...); // does not sample the image
vkCmdDraw(...); // does not sample the image
...
vkCmdDraw(...); // sample the image written by the compute shader, synchronized.
In the example, I have a bunch of draw calls within the same render pass that do not need the synchronization with the compute shader. They merely render a static geometry / textures which do not update dynamically. Yet in this configuration they must wait for the compute shader.
Ideally, I would like the independent draw calls between the vkCmdDispatch call and last vkCmdDraw to be able to run concurrently.
If I understand the spec correctly, I can't put the same pipeline barrier within the render pass.
Another alternative I considered is to use external subpass dependencies and record the draw call which samples the texture in a second subpass. But I don't know if this is a valid approach, and in any case it will be hard to maintain as this configuration is hard coded into the renderpass object.
So Is there a different synchronization approach that can achieve better concurrency?
| How to synchronize a draw call with a dispatch call as late as possible? | You should put that in a subpass external dependency for the subpass you need to use it within. However, unless the rendering commands you want to overlap with the compute shader are in a prior subpass in the dependency graph, this probably won't give you any greater performance.
Note that not even dynamic rendering helps you here, as vkCmdBeginRendering starts a render pass instance. This means that you still can't have pipeline barriers or events within them.
Essentially, collective rendering operations (either the render pass as a whole or subpasses within it) define an inflexible synchronization boundary between themselves as a group and the outside world. You can put synchronization around them, but not within them.
That being said, since rendering and compute operations are both eating up the same resources (shader stages), you probably weren't going to get too much overlap anyway.
|
76392154 | 76392264 | Description:
I have a problem very similar to https://leetcode.com/problems/jump-game-ii/description/.
Apart from discovering the minimum steps to reach the end, I need to retrieve the winning path array.
The winning path that is needed for my problem must satisfy the rule that given path A like [0,2,4,7] there are not any other paths with the same price (steps to the end, 3 in this case) whose nodes starting from the path's end has lower indexes.
A single step from i to i+1 does not have a price equal to 1; instead the price is some 0 <= x <= 2^32-1.
An example:
[0,2,4,5,7] is the winner
[0,1,4,6,7] is not the winner because 6 > 5 at index 3
This has to work also in reverse:
[7,6,3,1,0] is the winner
[7,5,4,2,0] is not the winner because 2 > 1 at index 3
Note that this rule is applied from right to left.
Clearly, I can produce all the paths with BFS and hence compare them but is there a more efficient way?
| Find BFS best path array |
The winning path that is needed for my problem must satisfy the rule that given path A like [0,2,4,7] there are not any other paths with the same price (steps to the end, 3 in this case) whose nodes starting from the path's end has lower indexes.
I can produce all the paths with BFS and hence compare them but is there a more efficient way?
Yes.
Search backwards, from end to start, in BFS fashion. When you enqueue each node's unvisited successors, do it in increasing order by index. The first path you discover will then be the reverse of the path you're looking for.
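A rough Python sketch of that idea, assuming the graph is already available as an adjacency list adj (a dict mapping each node to the list of nodes reachable from it) and treating the search as plain unweighted BFS; all names are illustrative:
from collections import deque

def best_path(adj, start, end):
    """BFS from `end` back to `start`, expanding neighbours in increasing
    index order; following the parent links from `start` yields the path
    in start -> end order."""
    parent = {end: None}
    queue = deque([end])
    while queue:
        node = queue.popleft()
        if node == start:
            break
        for nxt in sorted(adj[node]):          # lower indexes are enqueued first
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    if start not in parent:
        return None                            # no path exists
    path, node = [], start
    while node is not None:
        path.append(node)
        node = parent[node]
    return path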
|
76390601 | 76390737 | I am new to Spring Boot and I'm getting the following error when writing a login validation API:
Field userservice in com.Productwebsite.controller.UserController required a bean of type 'com.Productwebsite.service.UserService' that could not be found.
The injection point has the following annotations:
- @org.springframework.beans.factory.annotation.Autowired(required=true)
Controller class:
package com.Productwebsite.controller;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import com.Productwebsite.service.UserService;
@RestController
public class UserController {
@Autowired
private UserService userservice;
@GetMapping("user/{username}/{password}")
public int UserLogin(@PathVariable("username")String username1, @PathVariable("password")String password1) {
int flag = userservice.loginValidation(username1, password1);
if (flag == 0) {
return 0;
}
return flag;
}
}
Model class:
package com.Productwebsite.model;
public class users {
String username;
String password;
public String getUsername() {
return username;
}
public void setUsername(String username) {
this.username = username;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
@Override
public String toString() {
return "users [username=" + username + ", password=" + password + "]";
}
public users(String username, String password) {
super();
this.username = username;
this.password = password;
}
public users() {
super();
// TODO Auto-generated constructor stub
}
}
Service interface:
package com.Productwebsite.service;
import org.springframework.stereotype.Repository;
@Repository
public interface UserService {
public int loginValidation(String username,String password);
}
Service implementation class:
package com.Productwebsite.serviceImp;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.springframework.stereotype.Service;
import com.Productwebsite.Dbutil.dbutil;
import com.Productwebsite.service.UserService;
@Service
public class ServiceImp implements UserService {
Connection connection;
int flag = 0;
public ServiceImp() throws SQLException {
connection = dbutil.getConnection();
}
@Override
public int loginValidation(String username, String password) {
try {
PreparedStatement statement
= connection.prepareStatement("SELECT * FROM users WHERE username='"+username+"'");
ResultSet rs=statement.executeQuery();
while(rs.next()) {
if (rs.getString(1).equals(username)&&rs.getString(2).equals(password)) {
flag=1;
}
else {
System.out.println("Invalid username/password");
}
}
} catch (SQLException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return flag;
}
}
| spring boot application not working bean error | Add @ComponentScan to com.Productwebsite and restart the app.
@ComponentScan(basePackages = "com.Productwebsite")
@SpringBootApplication
public class YourMainClass{
...
}
Try the following if the above changes do not work:
Change the annotation by adding the bean name:
@Repository("userservice")
|
76388077 | 76390533 | I'm using a renderer function as below and want to hide/show specific checkcolumn cells depending on the variable record.data.hidden in my grid view.
{
xtype: 'checkcolumn',
renderer: function(value, metaData, record, rowIndex, colIndex, store, view) {
/*
if (record.data.hidden === true) {
// checkbox hidden
} else {
// checkbox shown
}
*/
},
itemId: 'mycheck',
bind: {
text: '{warranty}'
}
}
How do I do this?
| How to hide/show specific Cell in Grid | You can use the metaData passed to the renderer function to apply styling to the cell element, see the documentation here.
One easy way is to set the display CSS property depending on your criteria. This will be applied to the HTML <td> element created by Ext JS for the cell.
Try this:
{
xtype: 'checkcolumn',
renderer: function(value, metaData, record, rowIndex, colIndex, store, view) {
if (record.data.hidden === true) {
metaData.tdStyle = 'display: none;';
} else {
metaData.tdStyle = 'display: table-cell;';
}
return value;
},
itemId: 'mycheck',
bind: {
text: '{warranty}'
}
}
(I am not sure about the right syntax, maybe you don't need the semicolon after none and table-cell, and maybe you have to use quotations somewhere. Try it.)
But you can also create a CSS style and use tdCls instead, and perhaps tdAttr as well, though I am not sure about the latter.
|
76392232 | 76392265 | I'm trying to create a generic component in React TypeScript. Here is the code
interface Props<T> {
name: T;
}
function CheckChild<T>(props: Props<T>) {
return <div>My name is {props.name}</div>;
}
export default function Check() {
const name = "Alex";
return (
<div>
<CheckChild name={name} />
</div>
);
}
VS code gives me a type error about props.name:
Type 'T' is not assignable to type 'ReactNode'.
Type 'T' is not assignable to type 'ReactPortal'.ts(2322)
Check2.tsx(7, 21): This type parameter might need an `extends React.ReactPortal` constraint.
Check2.tsx(7, 21): This type parameter might need an `extends React.ReactNode` constraint.
But if I wrap props.name in JSON.stringify() - type error disappears.
return <div>My name is {JSON.stringify(props.name)}</div>;
Why does this happen?
This topic does not answer my question. I am interested in "Why JSON.stringify removes error"?
| Generic Component in React - Type is not assignable to type 'ReactNode'. But no error with JSON.stringify() | The reason why the type error disappears when you wrap props.name in JSON.stringify() is because JSON.stringify() converts the value to a string representation. In TypeScript, the type ReactNode represents the type of a valid React node, which can be a string, a number, a React component, an array, etc.
When you directly use props.name without JSON.stringify(), the TypeScript compiler assumes that the type of props.name should be compatible with ReactNode. However, in your code, you have a generic type T for the name property in the Props interface. The compiler doesn't have enough information to determine the actual type of T, so it gives a type error.
By using JSON.stringify(props.name), you explicitly convert the value to a string, which satisfies the ReactNode type requirement. This makes the type error disappear because JSON.stringify() always returns a string, and a string is a valid type for ReactNode.
If you want to avoid using JSON.stringify(), you can provide a more specific type constraint for the T generic in the Props interface. For example, if you know that props.name will always be a string, you can update the Props interface like this:
interface Props<T extends string> {
name: T;
}
|
76388077 | 76390533 | I want to get the data I marked. But since I gave the subcollection a custom name (as you can see in the picture, TbarRow and latpuldown), I cannot fetch the last piece of data I want to reach. When I write the name of the collection by hand it works, but I don't know how to fetch that name without writing it myself.
I created an array named gunharaketisimleri in the document (4 documents: Karın, bacak, itiş, çekiş) and tried to fetch the data from there.
and try it like this:
Future<void> fetchData() async {
try {
var snapshot = await FirebaseFirestore.instance
.collection("günler")
.doc(currentUserID + " GUN")
.collection("gün")
.doc("itiş")
.get();
snapshot.data()!.forEach((key, value) {
List<dynamic> favorites1 = [];
favorites1.add(value);
print(favorites1);
});
} catch (e) {
// Handle any errors
print(e.toString());
}
}
and output:
I/flutter (24013): [[TbarRow, latpuldown]]
I/flutter (24013): [itiş]
I/flutter (24013): [null]
I/flutter (24013):[gdc0mIEbwmdVFqZoLDC0gnIFCAy2]
I tried to reach it this way and got stuck here. Can you suggest another method, or how I should continue from this last version?
| Fetch data Firestore Flutter | To get the array from the document, would be something like:
var array = snapshot.get('gunharaketisimleri') as List;
Then you can loop over the list to get the individual values.
|
76389459 | 76390741 | I am having real issues stubbing one particular thing using sinon. I have a simple function I am testing
const floatAPIModels = require("models/float/floatAPIModels");
const insertFloatData = async (databaseConnection, endpoint, endpointData) => {
try {
const floatModel = floatAPIModels(databaseConnection);
await databaseConnection.sync();
if (endpoint === "people") {
endpointData.forEach(async (record) => {
await floatModel.Person.upsert(record);
});
}
return true;
} catch (error) {
console.log("Unable to insert data into the database:", error);
return error;
}
};
The problem is with floatAPIModels being an Object that returns things. My implementation is this
const { DataTypes } = require("sequelize");
const floatAPIModels = (sequelize) => {
const Person = sequelize.define(
"Person",
{
people_id: { type: DataTypes.INTEGER, primaryKey: true },
job_title: { type: DataTypes.STRING(200), allowNull: true },
employee_type: { type: DataTypes.BOOLEAN, allowNull: true },
active: { type: DataTypes.BOOLEAN, allowNull: true },
start_date: { type: DataTypes.DATE, allowNull: true },
end_date: { type: DataTypes.DATE, allowNull: true },
department_name: { type: DataTypes.STRING, allowNull: true },
default_hourly_rate: { type: DataTypes.FLOAT, allowNull: true },
created: { type: DataTypes.DATE, allowNull: true },
modified: { type: DataTypes.DATE, allowNull: true },
},
{
timestamps: true,
tableName: "Person",
}
);
return {
Person,
};
};
module.exports = floatAPIModels;
I have removed some things to cut down on code. At the moment I am doing something like this
const { expect } = require("chai");
const sinon = require("sinon");
const floatAPIModels = require("src/models/float/floatAPIModels");
const floatService = require("src/services/float/floatService");
describe("insertFloatData", () => {
let databaseConnection;
let floatModelMock;
beforeEach(() => {
databaseConnection = {};
floatModelMock = {
Person: { upsert: sinon.stub().resolves() },
};
sinon.stub(floatAPIModels, "Person").returns(floatModelMock.Person);
});
afterEach(() => {
sinon.restore();
});
it("should insert endpointData into the 'people' endpoint", async () => {
const endpoint = "people";
const endpointData = [{ record: "data" }];
await floatService.insertFloatData(databaseConnection, endpoint, endpointData);
expect(floatModelMock.Person.upsert.calledOnce).to.be.true;
expect(floatModelMock.Person.upsert.firstCall.args[0]).to.deep.equal(endpointData[0]);
});
});
With the above, I get
TypeError: Cannot stub non-existent property Person
But I have tried default, and a lot of other ways, but none of them seems to work.
How can I properly stub this and get the unit test working?
Thanks
| Stubbing Models contained within an Object | floatAPIModels is a function that returns a { Person } object. There is no Person property on the function itself. That's why you got the error.
In order to stub the floatAPIModels function, I will use the proxyquire module to do this.
E.g.
model.js:
const { DataTypes } = require("sequelize");
const floatAPIModels = (sequelize) => {
const Person = sequelize.define(
"Person",
{
people_id: { type: DataTypes.INTEGER, primaryKey: true },
// rest fields, don't matter for this test
// ...
},
{ timestamps: true, tableName: "Person", }
);
return {
Person,
};
};
module.exports = floatAPIModels;
service.js:
const floatAPIModels = require("./model");
const insertFloatData = async (databaseConnection, endpoint, endpointData) => {
try {
const floatModel = floatAPIModels(databaseConnection);
await databaseConnection.sync();
if (endpoint === "people") {
endpointData.forEach(async (record) => {
await floatModel.Person.upsert(record);
});
}
return true;
} catch (error) {
console.log("Unable to insert data into the database:", error);
return error;
}
};
module.exports = { insertFloatData }
service.test.js:
const sinon = require("sinon");
const proxyquire = require('proxyquire');
describe("insertFloatData", () => {
let databaseConnection;
beforeEach(() => {
databaseConnection = {
define: sinon.stub(),
sync: sinon.stub()
};
});
afterEach(() => {
sinon.restore();
});
it("should insert endpointData into the 'people' endpoint", async () => {
const endpoint = "people";
const endpointData = [{ record: "data" }];
const PersonStub = {
upsert: sinon.stub()
}
const floatAPIModelsStub = sinon.stub().returns({ Person: PersonStub })
const floatService = proxyquire('./service', {
'./model': floatAPIModelsStub
})
await floatService.insertFloatData(databaseConnection, endpoint, endpointData);
sinon.assert.calledOnce(PersonStub.upsert)
sinon.assert.match(PersonStub.upsert.firstCall.args[0], endpointData[0])
});
});
Test result:
insertFloatData
✓ should insert endpointData into the 'people' endpoint (4168ms)
1 passing (4s)
------------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
------------|---------|----------|---------|---------|-------------------
All files | 76.47 | 50 | 66.67 | 76.47 |
model.js | 60 | 100 | 0 | 60 | 4-24
service.js | 83.33 | 50 | 100 | 83.33 | 14-15
------------|---------|----------|---------|---------|-------------------
|
76391788 | 76392266 | Write a program that determines for two nodes of a tree whether the first one is a parent of another.
Input data
The first line contains the number of vertices n (1 ≤ n ≤ 10^5) in a tree. The second line contains n numbers, the i-th one gives the vertex number of direct ancestor for the vertex i. If this value is zero, then the vertex is a root of a tree.
The third line contains the number of queries m (1 ≤ m ≤ 10^5). Each of the next m lines contains two different numbers a and b (1 ≤ a, b ≤ n).
Output data
For each of the m queries print on a separate line number 1, if the vertex a is one of the parents of a vertex b, and 0 otherwise.
Input example #1
6
0 1 1 2 3 3
5
4 1
1 4
3 6
2 6
6 5
Output example #1
0
1
1
0
0
Problem
How can I fix the time limit exceeded error?
I also tried with <stdio.h> (scanf()/printf()), but the problem still exceeds the time limit.
It seems to exceed the limit while reading the input (there is a lot of input).
My solution:
#include<iostream>
#include<vector>
#include<algorithm>
using namespace std;
vector<vector<int>> g;
vector<int> time_in,time_out;
int Time=0;
void dfs(int node,int parent){
time_in[node]=++Time;
for(int &to:g[node]){
if(to!=parent){
dfs(to,node);
}
}
time_out[node]=++Time;// or time_out[node]=Time;
}
bool isAnchestor(int anch,int node){
return time_in[anch]<=time_in[node] and time_out[anch]>=time_out[node];
}
void readInt(int &n){
char ch;
int sign = 1;
while(ch = getchar(), isspace(ch)){//getchar()-->getchar_unlocked()
};
n = 0;
if(ch == '-')
sign = -1;
else n = ch - '0';
while(ch = getchar(), isdigit(ch))
n = (n << 3) + (n << 1) + ch - '0';
n *= sign;
}
int main(){
ios_base::sync_with_stdio(0);
//cin.tie(0);
cout.tie(0);
int n,start;
readInt(n);
time_in.resize(n+1);
time_out.resize(n+1);
g.resize(n+1);
vector<int> anchestors(n+2);
for(int i=1; i<=n; ++i){// I am thinking: the anchestors is sorted.
readInt(anchestors[i]);
if(anchestors[i] != 0) {
g[anchestors[i]].push_back(i);
g[i].push_back(anchestors[i]);
g[anchestors[i]].push_back(i);
}else{
start=i;
}
}
dfs(start,1);
int q,u,v;
readInt(q);
while(q--){
readInt(u),readInt(v);
cout<<isAnchestor(u,v)<<'\n';// The anchestor of v is u. //(u,v)-->(a,b)
}
return 0;
}
| How do I resolve the time limit? | Your approach using a depth first search is inefficient. In general it will look at more than just the path from b to a through the "parent pointers" or to the root, if b is not a descendant of a.
Instead start the search at b and go to the parent node repeatedly until you find the root or a. This will only consider the path from b to the root node. You don't need to consider any alternative paths, but you always know the edge to traverse.
#include <iostream>
#include <vector>
int main()
{
constexpr int RootParent = -1;
int n;
std::cin >> n;
std::vector<int> parents;
parents.reserve(n);
for (int i = 0; i != n; ++i)
{
int v;
std::cin >> v;
parents.push_back(v - 1); // note: storing 0-based indices of parents here (root's parent becomes -1)
}
int queryCount;
for(std::cin >> queryCount; queryCount > 0; --queryCount)
{
int searchedAncestor;
int descendant;
std::cin >> searchedAncestor >> descendant;
// inputs are still 1-based, so we need to decrement here
--searchedAncestor;
--descendant;
// repeatedly go to the parent node until we reach the root or the ancestor to attempt to find
while ((searchedAncestor != descendant) && (descendant != RootParent))
{
descendant = parents[descendant];
}
std::cout << (searchedAncestor == descendant) << '\n';
}
}
Demo on godbolt
|
76387967 | 76390557 | I have this HTML/JS code that generates N tables based on user input. I'm trying to separate the content into pages of "A4" size, with no more than 6 tables per page, in order to give the option to print as PDF.
I'm using the built-in JS function window.print(). The issue is that when the browser opens the print preview window, it only detects 2 pages and only the first page's content appears correct; the rest appear empty.
You can try, for example, generating 24 tables (that would be 4 pages): when you click on print, only 2 pages are detected.
Basically I'm using this to convert as PDF the content generated by the other 2 JS codes.
<button onclick="printPDF()">Print PDF</button>
function printPDF() {
window.print();
}
How to fix this, in order to print all pages? Thanks
| How to print as PDF all pages in HTML? | One problem is the hidden page overspill, so simply remove that so that all contents are eligible for printing. Note you may want to move hidden to just the button at print time.
The second problem is that you may need a page-break-after: always; when the table container blocks reach the 6-per-page limit, so try adding that in. Note that even with all 6 it will not add a blank page unless the table is too close to the edge and overspills, hence the 7th (13th, etc.) triggers the page break.
Later edit: it seems that an auto page-break is added in the fiddle, so it may not be needed. However, I also had issues with Firefox's assessment of the page height, so for my MS printing I had to reduce it by 3-6mm to avoid triggering an extra page!
https://jsfiddle.net/61qshzvc/
|
76390332 | 76390748 | Problem statement:
I want to experiment with different sequences for the routing through the selectOutput5 block.
Example:
Random: all 5 outputs can be chosen with probability 0.2 (that's the easy part that I solved).
Fixed: Output1 gets 6 agents, after those six agents Output2 gets six agents, and so on...
Do you have an idea on how I can do this based on expressions or conditions?
Thank you very much and have a great weekend
| Fixed Output sequence for selectOutput5 | Set the "Use:" property in your SelectOutput5 block to "Exit number". Then you can define which exit to use.
For your example, I would create an int type variable named currExit and set the initial value to 1. Set the value of "Exit number [1..5]:" in your SelectOutput5 block to currExit and write this code to the "On enter:" property:
if (self.in.count()%6==0) {
currExit++;
if (currExit>5) currExit=1;
}
This switches to the next exit for every 6th agent (self.in.count() returns the number of agents that have entered the block so far).
|
76388627 | 76390717 | I have the following JDBC input:
input {
jdbc {
id => "mypipeline"
jdbc_driver_class => "com.mysql.jdbc.Driver"
jdbc_driver_library => "/usr/share/logstash/drivers/mysql-connector-j-8.0.32.jar"
jdbc_connection_string => "my-connection-string"
jdbc_password => "my-user"
jdbc_user => "my-pass"
jdbc_fetch_size => 5000
schedule => "*/5 * * * *"
statement => "SELECT l.id, l.name, l.log_date FROM logs l WHERE l.created_at >= :sql_last_value"
}
}
The log_date column is as follows:
log_date DATE NULL DEFAULT NULL,
Warning: created_at and log_date are not the same column. The query that's being executed is as follows:
SELECT l.id, l.name, l.log_date FROM logs l WHERE l.created_at >= '1970-01-01 02:00:00'
which is wrong; since Istanbul does not go into DST, it should be '1970-01-01 03:00:00'. For values such as 1949-04-10 for log_date, I am receiving this error:
[2023-06-02T11:50:30,470][WARN ][logstash.inputs.jdbc ][main][mypipeline] Exception when executing JDBC query {:exception=>Sequel::DatabaseError, :message=>"Java::OrgJodaTime::IllegalInstantException: Illegal instant due to time zone offset transition (daylight savings time 'gap'): 1949-04-10T00:00:00.000 (Europe/Istanbul)", :cause=>"#<Java::OrgJodaTime::IllegalInstantException: Illegal instant due to time zone offset transition (daylight savings time 'gap'): 1949-04-10T00:00:00.000 (Europe/Istanbul)>"}
How do I resolve this?
| Exception when executing JDBC query - Illegal instant due to time zone offset transition with Logstash | "Illegal instant due to time zone offset transition" - Istanbul does not currently change between daylight saving time and standard time, but it has done in the past.
It is currently observing what is effectively DST year-round - and has done so since March 27th 2016, when the clocks moved forward by 1 hour from a UTC timezone offset of +2 hours to +3 hours.
More specifically regarding the "Illegal instant" error: In 1949 on 10th April at midnight, the clocks moved forward by 1 hour - so the local time of 1949-04-10 00:00 never actually happened. That is why it is a "gap" value, as noted in the error message.
This time zone data is captured in the IANA Time Zone Database (TZDB), which I assume is what your application uses, behind the scenes (e.g. if you are using Joda-Time).
You can use an online tool such as this one (and probably others):
WARNING - I cannot vouch for the accuracy of the data on this web site, but it does match the TZDB rule for this specific example, which I extracted as follows using Java (which uses TZDB data):
- on 1949-04-10 at 00:00
- the clocks moved forward by 1 hr (daylight saving)
- from TZ offset +02:00 to offset +03:00
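If it helps to double-check that rule without Java, the same TZDB data can be inspected from Python's standard-library zoneinfo module (Python 3.9+); the printed offsets below assume your installed tzdata contains the same historical rule:
from datetime import datetime
from zoneinfo import ZoneInfo

istanbul = ZoneInfo("Europe/Istanbul")

# one hour before and one hour after the 1949-04-10 00:00 transition
before = datetime(1949, 4, 9, 23, 0, tzinfo=istanbul)
after = datetime(1949, 4, 10, 1, 0, tzinfo=istanbul)

print(before.utcoffset())  # 2:00:00 (UTC+02, before the gap)
print(after.utcoffset())   # 3:00:00 (UTC+03, after the gap)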
Regarding your comment:
"should be '1970-01-01 03:00:00'"
For 1970, the TZDB indicates that there were no adjustments made. So, the effective offset from UTC was the one previously made in 1964:
- on 1964-10-01 at 00:00
- the clocks moved back by 1 hr
- from TZ offset +03:00 to offset +02:00
And that +02:00 is what you are (correctly) seeing in your data for that 1970 datetime.
If you want to avoid using a datetime which falls into one of these gaps (caused by the clocks moving forward), then you can do that programmatically - for example:
Assuming Java (since you mention Joda-Time): java.time discovering I'm in the daylight savings "gap"
Other mainstream languages should have similar capabilities.
Also, since you mentioned Joda, maybe you can consider using Java's java.time classes now (if you have a suitable version of Java), instead of using Joda-Time:
Note that from Java SE 8 onwards, users are asked to migrate to java.time (JSR-310) - a core part of the JDK which replaces this project.
Specific solution to the case with Logstash
Adding jdbc_default_timezone => "GMT" to the Logstash configuration and altering the timezone of the host machine will make Logstash query the database without getting this error.
|
76389709 | 76390777 | I have a Power BI table with a column called "hours". It can have different time records with following different formats:
PT5H15M{YearMonthDayTime}, PT0S{YearMonthDayTime}, PT5H15M, PT10H, etc.
How can I clean them up so that the hours are represented as numbers, for example, PT5H15M{YearMonthDayTime} would be 5,25 and PT3H30M would be 3,5.
I can't find any easy way to filter the column since some rows have the {YearMonthDayTime} ending and others don't. I don't want to transform every record manually.
Thanks already!
| Clean column with different time records in powerbi | You can achieve this in power query
Steps followed in Power Query
Extract Text between delimiters PT and H to populate Hours column. Change type to decimal
Extract Text between delimiters H and M to populate Minutes column. Change type to decimal
Replace Errors with zero. Replace all null values with zero.
Divide the Minutes column by 60 to convert it into hours.
Add Hours column in step 1) to hours column in step 4)
Convert result column type to Text
In result column: Replace . with , using Transform -> Replace Values
M code:
let
Source = Excel.Workbook(File.Contents("C:\Ashok\Power BI\Stack Overflow\Data_02_jun2_2023.xlsx"), null, true),
Data_Sheet = Source{[Item="Data",Kind="Sheet"]}[Data],
#"Changed Type" = Table.TransformColumnTypes(Data_Sheet,{{"Column1", type text}}),
#"Promoted Headers" = Table.PromoteHeaders(#"Changed Type", [PromoteAllScalars=true]),
#"Changed Type1" = Table.TransformColumnTypes(#"Promoted Headers",{{"Hours", type text}}),
#"Inserted Text Between Delimiters" = Table.AddColumn(#"Changed Type1", "Text Between Delimiters", each Text.BetweenDelimiters([Hours], "PT", "H"), type text),
#"Inserted Text Between Delimiters1" = Table.AddColumn(#"Inserted Text Between Delimiters", "Text Between Delimiters.1", each Text.BetweenDelimiters([Hours], "H", "M"), type text),
#"Changed Type2" = Table.TransformColumnTypes(#"Inserted Text Between Delimiters1",{{"Text Between Delimiters.1", type number}}),
#"Inserted Division" = Table.AddColumn(#"Changed Type2", "Division", each [Text Between Delimiters.1] / 60, type number),
#"Changed Type3" = Table.TransformColumnTypes(#"Inserted Division",{{"Text Between Delimiters", type number}}),
#"Replaced Errors" = Table.ReplaceErrorValues(#"Changed Type3", {{"Text Between Delimiters", 0}}),
#"xyz" = Table.TransformColumns(#"Replaced Errors", {{"Text Between Delimiters", each if _ is null then 0 else _},
{"Text Between Delimiters.1", each if _ is null then 0 else _}, {"Division", each if _ is null then 0 else _}}),
#"Changed Type4" = Table.TransformColumnTypes(xyz,{{"Text Between Delimiters", type number}, {"Division", type number}}),
#"Inserted Addition" = Table.AddColumn(#"Changed Type4", "Addition", each [Text Between Delimiters] + [Division], type number),
#"Changed Type5" = Table.TransformColumnTypes(#"Inserted Addition",{{"Addition", type text}}),
#"Replaced Value" = Table.ReplaceValue(#"Changed Type5",".",",",Replacer.ReplaceText,{"Addition"})
in
#"Replaced Value"
|
76391920 | 76392279 | So this was the question and I implemented it using Python, but there is an error in the code and I am not able to fix it, so please help. I am attaching the code as well.
def delete23(a):
for i in range(0, len(a) - 1):
for j in range(0, len(a) - 1):
if a[i] == a[j] and i != j:
a.pop(j)
print(a)
a = [1, 2, 3, 4, 4, 1, 7]
print(len(a))
delete23(a)
| Write a Python program to remove duplicates from a list of integers, preserving order | To filter duplicates while preserving the order, a set of seen values is used quite often. The logic is very simple: if a value is not present yet in seen, the value is encountered for the first time.
lst = [1, 2, 3, 4, 4, 1, 7]
seen = set()
lst2 = []
for x in lst:
if x not in seen:
seen.add(x)
lst2.append(x)
print(lst2)
With a little trick, this can be shortened:
seen = set()
lst2 = [(seen.add(x) or x) for x in lst if x not in seen]
print(lst2)
To explain the expression inside (...): set.add returns None and (None or VALUE) is evaluated to VALUE
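Another compact option (not in the snippets above, but also order-preserving since dicts keep insertion order in Python 3.7+) is dict.fromkeys:
lst = [1, 2, 3, 4, 4, 1, 7]
lst2 = list(dict.fromkeys(lst))  # duplicate keys collapse, order is preserved
print(lst2)                      # [1, 2, 3, 4, 7]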
Warning: ALL solutions using a set or a dict require that all values are hashable.
|
76388344 | 76390724 | The below code generates a plot and 4PL curve fit, but the fit is poor at lower values. This error can usually be addressed by adding a 1/y^2 weighting, but I don't know how to do it in this instance. Adding sigma=1/Y_data**2 to the fit just makes it worse.
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
def fourPL(x, A, B, C, D):
return ((A-D) / (1.0 + np.power(x / C, B))) + D
X_data = np.array([700,200,44,11,3,0.7,0.2,0])
Y_data = np.array([600000,140000,30000,8000,2100,800,500,60])
popt, pcov = curve_fit(fourPL, X_data, Y_data)
fig, ax = plt.subplots()
ax.scatter(X_data, Y_data, label='Data')
X_curve = np.linspace(min(X_data[np.nonzero(X_data)]), max(X_data), 5000)
Y_curve = fourPL(X_curve, *popt)
ax.plot(X_curve, Y_curve)
ax.set_xscale('log')
ax.set_yscale('log')
plt.show()
| Apply a weighting to a 4 parameter regression curvefit | Don't add inverse square weights; fit in the log domain. Always add bounds. And in this case, curve_fit doesn't do a very good job; consider instead minimize.
import numpy as np
from scipy.optimize import curve_fit, minimize
import matplotlib.pyplot as plt
def fourPL(x: np.ndarray, a: float, b: float, c: float, d: float) -> np.ndarray:
return (a - d)/(1 + (x / c)**b) + d
def estimated(x: np.ndarray, a: float, b: float, c: float, d: float) -> np.ndarray:
return np.log(fourPL(x, a, b, c, d))
def sqerror(abcd: np.ndarray) -> float:
y = np.log(fourPL(x_data, *abcd)) - np.log(y_data)
return y.dot(y)
x_data = np.array([700, 200, 44, 11, 3, 0.7, 0.2, 0])
y_data = np.array([600000, 140000, 30000, 8000, 2100, 800, 500, 60])
guess = (500, 1.05, 1e6, 1e9)
bounds = np.array((
(1, 0.1, 1, 0),
(np.inf, 10, np.inf, np.inf),
))
popt, _ = curve_fit(
f=estimated, xdata=x_data, ydata=np.log(y_data), p0=guess,
bounds=bounds,
)
print('popt:', popt)
result = minimize(
fun=sqerror, x0=guess, bounds=bounds.T, tol=1e-9,
)
assert result.success
print('minimize x:', result.x)
x_curve = 10**np.linspace(-1, 3, 1000)
fig, ax = plt.subplots()
ax.scatter(x_data, y_data, label='Data')
ax.plot(x_curve, fourPL(x_curve, *popt), label='curve_fit')
ax.plot(x_curve, fourPL(x_curve, *result.x), label='minimize')
ax.plot(x_curve, fourPL(x_curve, *guess), label='guess')
ax.set_xscale('log')
ax.set_yscale('log')
ax.legend()
plt.show()
|
76392252 | 76392286 | I am currently developing a multi-threaded application in C++ where different threads are expected to process data from a shared data structure. I'm aware that the standard library provides std::future and std::async to easily handle asynchronous operations, and I'm trying to use these in my application.
Here's a simplified sketch of my code:
#include <vector>
#include <future>
std::vector<int> shared_data;
// Some function to be executed asynchronously
void process_data(size_t start, size_t end) {
for (size_t i = start; i < end; ++i) {
// Do something with shared_data[i]
}
}
int main() {
std::future<void> fut1 = std::async(std::launch::async, process_data, 0, 10);
std::future<void> fut2 = std::async(std::launch::async, process_data, 10, 20);
// Other operations...
return 0;
}
I have the following questions regarding this code:
Since shared_data is being accessed by multiple threads, do I need to protect it with a std::mutex or other synchronization primitives?
Is there a way to pass std::future objects to other functions or store them in a data structure, and what would be the potential implications of doing so?
How can I handle exceptions thrown by the process_data function and propagated through the std::future objects?
Any guidance or best practices related to the usage of std::future in multithreaded scenarios would be greatly appreciated.
In order to make the shared data access thread-safe, I attempted to introduce an std::mutex and lock it using std::lock_guard in the process_data function like so:
std::mutex mtx;
void process_data(size_t start, size_t end) {
std::lock_guard<std::mutex> lock(mtx);
for (size_t i = start; i < end; ++i) {
// Do something with shared_data[i]
}
}
I also attempted to store std::future objects in a std::vector for later use, and tried to handle exceptions using a try/catch block around the get() function of std::future.
I was expecting that locking the std::mutex would ensure that only one thread can access the shared data at a time, preventing race conditions. I also expected that I would be able to easily store the std::future objects in a vector and handle exceptions from the asynchronous tasks.
However, I'm unsure if these methods are the most efficient or even correct, given the lack of detailed examples or guidelines on these topics in the documentation and tutorials I've found. I'm particularly interested in understanding the correct way to use std::future and std::async in more complex scenarios, and how to handle exceptions properly in this context.
| Multithreading with std::future in C++: Accessing shared data | If the data is read-only (and it's not too much), just copy it.
Otherwise make a shared_ptr to your data and, using a lambda expression, capture the shared_ptr by value (not by reference!). This will extend the lifetime of the data to the lifetime of the thread that uses it longest. So something like this:
std::shared_ptr<SharedData> data = std::make_shared<SharedData>();
auto future = std::async(std::launch::async, [data] { process_data(data); });
If the data is read/write, then add a mutex or some other synchronization mechanism to your data class and use getters/setters that take the lock to update/read the values in the data.
|